
OpenStack Summit 2016 reinforced my thoughts about OpenStack and its proliferation into mainstream IT organizations. First, OpenStack is much more than the glorified science project that some pundits put forth in 2015. Second, OpenStack is still not the frictionless consumption experience for every organization that some OpenStack vendors would have you believe. OpenStack is probably somewhere in the middle of this spectrum, with a majority of the interest lying in OpenStack private clouds, containers, and Kubernetes.

 

The barrier to entry has decreased significantly since the days of extrapolating from industry journal articles. However, maintaining and sustaining a viable OpenStack ecosystem requires professional service-level skills. The devil is in the details, especially with all the customization, configurations, and permutations that OpenStack allows. It's one of those situations where great power requires great responsibility. For example, Day 1 OpenStack installation is now relatively easy, and is packaged as such. But where you go on Day 2 depends either on your organization's skill level, or what it can afford to pay. For some, that means nowhere, unless you have the proper support for training and gaining expertise. With that in mind, I'd like to share with the thwack community one of the best sessions at OpenStack Summit. It happened to be a hands-on workshop focused on getting started with OpenStack. It was delivered by Rackspace OpenStack Evangelist, Ken H.u.i. (Updated: apparently "h" "u" "i" is not allowed), and Red Hat Sr. Software Engineer and author, Dan Radez.

 

If OpenStack is on your list of deliverables as a proof-of-concept or in production this year, my advice is to get as much hands-on training and practical experience as you can possibly get. I also suggest leaning on the OpenStack community. There are many wonderful tech evangelists who are happy to share their time, knowledge, and expertise.

 

Getting Started with OpenStack with @kenhuiny and Dan Radez

Here are some prerequisites before you start out, and here are the slides from the workshop.

 

Summer is almost upon us, and with it comes the promise of at least a week or two of that mythical stretch of time called "vacation," or for you non-U.S. readers, that government-guaranteed period called "holiday." (No, I'm not the least bit jealous.)

 

When we finally escape our desks and the glare of fluorescent lights, we’ll likely find ourselves outside, peering up at the piercing day-star we so rarely see, wondering what to do that doesn't involve a keyboard or screen (unless that screen happens to be an e-reader of some sort).

 

Well, we at SolarWinds are here to help. The Head Geek™ team, along with a few of our friends and colleagues, has assembled a list of titles you are guaranteed to find thought-provoking, engaging, or just plain fun.

 

Be warned: this list is... well, we asked a bunch of geeks for their thoughts on fun books to read, so let's just call it comprehensive and leave it at that, shall we?

 

Take a look, dip into your Amazon credits, slather on some SPF 50, and start putting something more than network diagrams, PowerShell™ verbs, and IOS commands into that brain of yours.

 

  • "Thing Explainer: Complicated Stuff in Simple Words" and "xkcd: volume 0" by Randall Munroe
    • Recommended by several of us.
    • Why: Randall Munroe, author of the popular Web comic xkcd, has a twisted sense of humor and the ability to take deeply technical concepts and put them into amusing contexts. Trust us. You’ll be copying pages from the book and posting them in your cube before you are halfway finished.
  • "The Decline and Fall of IBM: End of an American Icon?" by Robert X. Cringely
    • Recommended by Leon Adato, Head Geek.
    • Why: IBM is currently (and somewhat quietly) going through a sea change in its business. Understanding some of the background, history, and behind-the-scenes conversations about the company can help make sense of the headlines when they hit social media. Cringely has watched IBM rise and fall, and he’s the best prognosticator to foretell what might come next.
  • "Steve Jobs" by Walter Isaacson
    • Recommended by Lawrence Garvin, Head Geek.
  • "Zero Day" by Mark Russinovich
    • Recommended by Lawrence Garvin, Head Geek.
  • "Trojan Horse" by Mark Russinovich
    • Recommended by Lawrence Garvin, Head Geek.
  • "Daemon" and "Freedom" by Daniel Suarez
    • Recommended by Doug Johnson.
    • Why: Suarez takes current technology, extends it to an imaginable utility, and uses that world to build a thriller. Both books basically make up one story, and the ending of the first would be disappointing if the second book didn't exist. The story involves a dark net that a large group of people use to gain control over their lives against an oppressive government. The significance of this is that Suarez persuades the reader to imagine this becoming reality within the next ten years, not as an eventuality in some weird future dystopia.
  • "Being Digital" by Nicholas Negroponte
    • Recommended by Leon Adato, Head Geek.
    • Why: Back in the 1970s, before iTunes®, Napster®, and the Internet, Mr. Negroponte envisioned a world where "bits" could be de-coupled from atoms and presented on their own. That world has come to be, and this work continues to be prescient in its understanding of the human and technical hurdles we face.
  • "Accidental Empires" by Robert X. Cringely
    • Recommended by Leon Adato, Head Geek.
    • Why: First published in 1992, this book explained the then-current state of computing through the lens of what came before, specifically the monumental contributions of the Xerox PARC project. While we've come a long way since TrueType, Ethernet, and the mouse, it's still a wonderful foundation for people wondering what personalities and historical decisions came together to bring about Microsoft®, Apple®, and the rest of the Silicon Valley landscape.
  • "The Black Swan" by Nassim Taleb
    • Recommended by Leon Adato, Head Geek.
    • Why: Monitoring professionals make their living attempting to set up systems to catch nearly impossible scenarios, but we may not truly grasp the folly of that course of action. Mr. Taleb does a fantastic job of explaining why black swan events tend to be so attention-grabbing yet unimportant in the larger picture, and how we can refocus our efforts so that they have a greater impact.
  • “Super Mario: How Nintendo Conquered America” by Jeff Ryan
    • Recommended by Kevin Sparenberg, Technical Product Manager.
  • “The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron” by Bethany McLean, Peter Elkind, and Joe Nocera
    • Recommended by Kate Asaf, Program Manager.
    • Why: I loved this book because I think the Enron scandal was all about hubris and thinking that being smart makes some people untouchable. The book has been updated to cover the trials as well.
  • “The Big Short: Inside the Doomsday Machine” by Michael Lewis
    • Recommended by Kate Asaf, Program Manager.
    • Why: I found this to be a bit of a slog to get into, but as boring as financial markets are, the author does a good job of intermingling some personal stories so you don’t totally fall asleep reading about credit default swaps.

 

Other recommendations (without attribution or explanation).

  • "Killing Monsters: Why Children Need Fantasy, Super Heroes, and Make-Believe Violence" by Gerard Jones
  • "Where Wizards Stay Up Late: The Origins of the Internet" by Katie Hafner and Matthew Lyon
  • "How to Create a Mind: The Secret of Human Thought Revealed" by Ray Kurzweil

Things I find amusing from around the Internet…

 

2015’s MVPs – The Most Vulnerable Players

For all the Microsoft® haters out there who believe Windows® is insecure when compared to other platforms, here is some data to help you get a feel for the truth: everything is broken.

 

The Costs of Running SQL Server on Linux

Yes, SQL Server® costs a bit of money, but with the move to Linux®, I wouldn’t be surprised to see the costs come down in favor of a subscription-based model. 

 

No Cybersecurity Training Required at Top U.S. Universities

Well, that might explain why there seems to be a lack of such experience among members of Congress, too. 

 

Titanic Sinking in Real-Time

For me, the best part of this video was the absence of Celine Dion songs.

 

Boaty McBoatface and the False Premise of Democracy

This is why we can't have nice things. Also, this is a great reminder of why crowdsourcing is often a horrible idea. None of us is as dumb as all of us.

 

Bigger Underwater Data Center in the Works

The future of the data center could mean lots of extra time spent on a beach for many of us. I like where this is headed.

 

How Much Would Darth Vader's Suit Cost?

Pretty sure the cost of the suit was covered through UnitedEmpire HMO premiums, but interesting to note the costs. I bet Elon Musk could build this for 1/10 the cost.

 

RDBMS Genealogy

For those of us who like to remember the good old days.

 


Regardless of which new technologies federal network administrators adopt, they will always need dependable, consistent, and highly available solutions that keep their networks running constantly.

 

Sadly, that’s not always the reality.

 

Last year's survey of federal IT professionals by my company, SolarWinds, indicated that performance and availability issues continue to plague federal IT managers. More than 90 percent of survey respondents claimed that end-users in their organizations were negatively impacted by a performance or availability issue with business-critical technology over the past year, and nearly 30 percent of respondents claimed these issues occurred at least six times.

 

What can IT pros do about this?

 

Simplify

 

Don’t worry about deploying everything in one fell swoop. Instead, take a piecemeal approach. Focus on a single implementation and make sure that particular piece of technology absolutely shines. The trick to this strategy is keeping the big picture in mind as the individual pieces of technology are deployed.

 

Monitor

 

Network monitoring is a must. To do it properly, start with a baseline diagnostic that assesses the overall network performance, including availability and average response times. Once this baseline is established, look for anomalies, including configuration changes that other users may have made to the network. Find the changes, identify who made them, and factor their impact into the performance data as you identify problems and keep the network running.
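To make the baseline idea concrete, here is a minimal sketch in Python. It probes a few targets over TCP, builds a mean/standard-deviation baseline for response time and availability, and flags later probes that fall outside it. The target hosts, sample count, and three-sigma threshold are illustrative assumptions, not settings from the survey or any particular monitoring product.

```python
# Minimal baseline-then-anomaly sketch. Targets and thresholds are
# illustrative assumptions; a monitoring platform does this at scale.
import socket
import statistics
import time

HOSTS = [("example.com", 443), ("example.org", 443)]  # assumed targets
SAMPLES = 10        # probes per host used to build the baseline
SIGMA_LIMIT = 3.0   # flag probes this many std devs above the mean

def connect_time_ms(host, port, timeout=2.0):
    """Return the TCP connect time in milliseconds, or None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None  # failures count against availability, not latency

for host, port in HOSTS:
    probes = [connect_time_ms(host, port) for _ in range(SAMPLES)]
    ok = [t for t in probes if t is not None]
    availability = len(ok) / len(probes)
    if len(ok) < 2:
        print(f"{host}: not enough data (availability {availability:.0%})")
        continue
    mean, stdev = statistics.mean(ok), statistics.stdev(ok)
    print(f"{host}: mean={mean:.1f}ms stdev={stdev:.1f}ms avail={availability:.0%}")
    # A later probe beyond mean + SIGMA_LIMIT * stdev is an anomaly worth
    # correlating with recent configuration changes, as described above.
    latest = connect_time_ms(host, port)
    if latest is not None and latest > mean + SIGMA_LIMIT * stdev:
        print(f"  anomaly: {latest:.1f}ms exceeds the baseline threshold")
```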

 

Plan

 

Make no mistake: errors will happen, and it’s important to have a plan in place when things go south. That plan should comprise three facets: technology, people, and process.

 

First, a well-defined technology plan outlines how to best handle the different components of the network infrastructure, including monitoring and building in redundancies. That means having a backup for equipment that’s core to an agency’s network traffic.

 

Second, make sure the IT staff includes several people who share the same skillset and expertise. What happens if a key resource is out sick or leaves the organization? All of that expertise is gone, leaving a very big knowledge gap that will be hard to fill.

 

Third, develop a process that allows for rollbacks to prior configurations. That’s an important failsafe in case of a serious network error.

 

Interact

 

IT professionals need to understand organizational objectives to accomplish their own goals, which include optimizing and securing a consistently dependable network. Doing that is not just about technology. It also requires the ability to communicate freely with colleagues and agency leadership so that everyone is working toward the same goals.

 

CIOs must build a culture that is barrier-free and allows for regular interaction with other business leaders outside the technology realm. After all, isn’t that network or database that the IT staff manages directly tied to agency performance?

 

Having everything run perfectly all the time is an impossible dream. However, six nines of uptime is certainly achievable. All it takes is a little bit of simplification and planning, and a whole lot of technology and teamwork.

 

Find the full article on GCN.

 

Interested in this year’s cyber security survey? Go here.

The Root Cause of Silo Thinking

I have had to face the silo problem many times in the past. In IT, many organizations work in silos. For example, the Server department only cares about the problems concerning servers. When a ticket with a problem arrives, it often starts a process that I call "Ticket Ping Pong": instead of solving the problem, it is easier to forward the ticket to a different IT department and let them take care of it.

Some User Help Desks assign all tickets to the Networking Team because "it's always the network's fault." With that mindset, the people working in the UHD put you in the position of proving that your system is not responsible for causing the problem. But that isn't the best approach in many cases. Problems could be solved more quickly and effectively if everybody worked together.

 

One Monitoring to rule them All

It is very common for every IT department to have its own monitoring in place, often a highly specialized system that comes directly from the hardware vendor: a shiny little box from the well-trusted vendor they have been using for ages. These systems have their benefits and are often a combination of management and monitoring. So, for example, for the Server guys there are no problems unless they show up in their monitoring systems. For a specific problem related to only one system, that works. But in the real world you often face more complex problems that span multiple systems. You need monitoring that covers all the complexity in your infrastructure and can handle different device types and services. The highly specialized, vendor-specific monitoring can coexist with that, but all the IT departments have to break up the silos and start to work together. The general thinking should be that we are all in the same boat. A monitoring project can build bridges and bring the different IT departments closer together.

The goal should be to have all systems in the environment on the same monitoring at the end of the day. That creates visibility and trust. When everybody is looking at the same monitoring, they share the same knowledge of what is going on when a ticket shows up. When the server admin sees in one pane of glass that the firewall is running at 100% CPU utilization, he knows how to address the ticket, and that it is maybe a good idea to wait for feedback from the firewall guys.

In times of virtualization and SDN, this is even more important. There are so many dependencies between the different parts of the infrastructure that your initial entry point is hard to figure out. Sometimes the problem is hiding behind the different layers of virtualization. It is a big effort to bring all the systems into centralized monitoring, but it is absolutely worth doing. At the end of the day, all software-defined anything runs on hardware, and that hardware needs to be monitored.

kong.yang

OpenStack Summit 2016

Posted by kong.yang Employee Apr 22, 2016

This marks my first year attending OpenStack Summit. I am as giddy as I was when World of Warcraft first came out and everyone was racing to level 60. My hope is to learn and power level myself during my time at OpenStack. I will be leveraging my industry friends and the OpenStack community to fast track my working knowledge.

 


 

OpenStack Summit is first come, first served for most of the sessions, with RSVPs for the working/hands-on training sessions. It puts the onus on you to prepare, prioritize, and decide. And, in some cases, it's a decision between SMEs who are presenting AWESOMESAUCE material in separate sessions at the same time. All that really means is that I'll have to hustle to engage with all the SMEs in the areas that interest me, such as containers, automation and orchestration, microservices, etc. Nothing like getting the 411 directly from the source.

 

Next Friday I'll blog about my observations and takeaways from OpenStack Summit and offer some tips on using and contributing to the OpenStack Community. If you're attending as well, let's try to meet up. It's a small IT world, after all.

 

What OpenStack projects are you currently working with or thinking about incorporating into your existing environment? Which ones excite you and why? Please comment below.

On Monday, we held our first Seattle #SWUG (SolarWinds User Group) at the Living Computer Museum.

thwacksters from the Pacific Northwest, and from as far away as Missoula, Montana (shout out to edsando), gathered together, united in our geekiness and love of SolarWinds.

 


 

In an effort to keep the conversation going and provide you with the valuable resources that were mentioned at the SWUG, here’s a recap of what went down:

 

MC for this SWUG: adatole

 

‘Product Roadmap Discussion + Q & A’

cobrien, Sr. Product Manager (Networks)

patrick.hubbard, Head Geek (Systems)

stevenwhunt, Sr. Product Manager (Systems)

     *Not in attendance, but can help with any follow-up questions regarding systems products.


 

Network products:

 

Systems products:

 

Additional Resources:

 

‘SolarWinds Best Practices & Thinking Outside the Box’

Dez, Technical Product Manager

KMSigma, Product Manager

 

For this session, we asked attendees to vote on what they wanted to talk about. The top-voted topic was “Alerting best practices: Leveraging custom properties to reduce alert noise.” They talked about muting alerts via custom properties such as the ones below, so you’re still getting the statistics, but you’re not getting alerted to things that don’t require immediate attention:

N_Mute, I_Mute, etc.
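As a minimal illustration of how such properties can be inspected programmatically, here is a sketch using the Orion SDK for Python (orionsdk). It assumes a yes/no custom property named N_Mute exists on Orion.Nodes, per the naming above; the server name and credentials are placeholders, and this is not an official example from the session.

```python
# Minimal sketch, assuming a yes/no custom property named N_Mute on
# Orion.Nodes (per the naming above). Server/credentials are placeholders.
from orionsdk import SwisClient

swis = SwisClient("orion.example.com", "admin", "password")  # placeholders

# Muted nodes keep collecting statistics; the alert's trigger condition
# would simply exclude them (e.g., require N_Mute = FALSE).
results = swis.query(
    "SELECT Caption, CustomProperties.N_Mute "
    "FROM Orion.Nodes "
    "WHERE CustomProperties.N_Mute = TRUE"
)

for row in results["results"]:
    print(f"{row['Caption']}: muted, stats still collected, no alert fired")
```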

 

They also talked about how you can utilize custom properties to group alerts or send alerts to only certain teams in your organization.

 

The other noteworthy topic they covered was “Optimizing the SolarWinds implementation: Building Orion Servers based on Role”.

KMSigma made it look easy with this diagram:


 

For more tips on how to optimize Orion, check out this video: Optimizing SolarWinds Orion - YouTube

 

Additional resources:

 

‘Customer Spotlight - How University of Washington Integrated the Syslog, Trap and Alert Manager’

RichardLetts, Network Operations Center Manager, SolarWinds MVP

 

Richard shared some interesting facts about UW (most surprising is that their budget is several billion dollars) and gave some useful tips on traps & alerts, and how to get started with SQL.


 

Resource links:

 

‘Customer Spotlight - How Atmosera is Leveraging SolarWinds to Deliver a Superior Customer Experience’

byrona, Systems Engineer, SCP, SolarWinds MVP

jcheney, SVP Client Operations

 

Jared Cheney and Byron Anderson gave us an overview of Atmosera and their evolution as a company and discussed how SolarWinds helps meet their monitoring needs for Hybrid Cloud.

 

They also made the entire room jealous with their NOC views #NOCEnvy.


 

They’re using NPM, NCM, IPAM, SAM, and LEM to support their customers every day and to ensure the health of their network and thousands of applications. Atmosera uses LEM to help meet HIPAA/HITECH, PCI DSS 3.1, IRS 1075, NIST 800-53, and several other compliance standards. They said LEM makes compliance audits a breeze with its secure collection and storage of logs, quick and easy reporting, and fast alert creation.

 

‘SolarWinds User Experience: Dear New Orion User…’

kbongi, User Experience

 

In a nutshell, we want your feedback! Here’s how to get involved:

 

Thank you to everyone who attended! We really enjoyed meeting and speaking with each of you.

 

And last but not least, thank you to our sponsor and professional services partner, Loop1Systems (BillFitz_Loop1 and mrxinu) for booking the venue & hosting the happy hour!

 

If you left without filling out a survey, please help us out by telling us how we can make SWUGs even better>> 160418 Seattle SWUG Survey

 

If you’re in the Atlanta, GA area sign up for our upcoming SWUG>> June 9, 2016 - Atlanta, Georgia SWUG

TL;DR: 'Continuous Improvement' promotes a leaner cycle of data gathering and prioritized fixes, which can deliver big benefits for Ops teams without large investments in organizational change.

People are thinking and talking about network automation more than ever before. There is a bewildering array of terms and acronyms bandied about. Whether people are speaking about DevOps or SDN, the central proposition is that you'll reach a nirvana of automation where all the nasty grunt work is removed and your engineers' time is spent, erm... engineering.

Yet many engineers and network managers are rejecting the notion of SDN and DevOps. These folks run real, warts-and-all networks and are overwhelmed by the day-to-day firefighting, escalations, repetitive manual processes, inter-departmental friction, etc. They can see the elegance and power of software-defined networks and long for the smooth-running harmony of a DevOps environment. Most engineers can see the benefits of DevOps but see no path; they simply don't know how to get to the promised land.

Network equipment vendors purport to solve your management and stability problems by swapping your old equipment with newer SDN-capable equipment. Call me skeptical, but without better processes your shiny new equipment is just another system to automate and manage. I don't blame network equipment vendors for selling solutions, but it's unlikely their solution will solve your technical debt and stability issues. Until your operational and process problems are sufficiently well defined, you'll waste time and money hopelessly trying to find solutions.

DevOps is an IT philosophy that promises many benefits, including holistic IT, silo elimination, developers who are aware of operational realities, continuous integration, tighter processes, increased automation, and so on. I'm a complete fan of the DevOps movement, but it requires nothing short of a cultural transformation, and that's kinda hard.

I propose that you start with 'Continuous Improvement', which is an extremely powerful component of the DevOps philosophy. You start by focusing your limited resources on high-leverage tasks that increase your view of the network. You gather data and use it to identify your top pain point, then focus your efforts on eliminating it. If you've chosen the top pain point, you should have enough extra hours to start the process again. In this virtuous circle scenario, you have something to show for every improvement cycle, and a network which becomes more stable as time passes.

Adopting 'Continuous Improvement' can deliver the fundamental benefits of DevOps without needing to bring in any other teams or engage in a transformation process.

Let's work through one 'Continuous Improvement' scenario:

  1. Harmonize SNMP and SSH access - The single biggest step you can take towards automation is to increase the visibility of your network devices. Inconsistent SNMP and SSH access to your network devices is one of the biggest barriers to visibility. Ensure you have correct and consistent SNMP configuration. Make sure you have TACACS- or RADIUS-authenticated SSH access from a common bastion. This can be tedious work, but it provides a massive return on investment. All of the other gains come from simplifying and harmonizing access to your network devices.
  2. Programmatically pull SNMP data and configurations - This step should be easy: just gather the running config and basic SNMP information for now (see the sketch after this list). You can tune it all later.
  3. Analyze - Analyze the configuration and SNMP data you gathered. Talk to your team and your customers about their pain points.
  4. Prioritize - Prioritize one high-leverage pain point that you can measure and improve. Don't pick the gnarliest problem. You should pick something that can be quickly resolved but saves a lot of operational hours for your team. That is high leverage.
  5. Eliminate the primary pain point - Put one person on this task and make it a priority. You desperately need a quick win here.
  6. Celebrate - Woot! This is what investment looks like. You spent some engineer hours, but you got more engineer hours back.
  7. Tune up - Identify the additional data that would help you make better decisions in your next cycle, and tune your management system to gather that extra data... then saddle up your horse and start again.
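As a concrete illustration of step 2, here is a minimal sketch using the netmiko library to pull running configs over SSH and file them by date. The device list, credentials, and cisco_ios platform are assumptions for the example; in practice, the inventory would come from the access you harmonized in step 1, and basic SNMP facts (sysName, sysDescr, etc.) would be gathered alongside with an SNMP library.

```python
# Minimal config-pull sketch for step 2 (pip install netmiko).
# The inventory and credentials below are illustrative assumptions.
from datetime import date
from pathlib import Path

from netmiko import ConnectHandler

DEVICES = [  # assumed inventory; in practice, pull this from your NMS
    {"device_type": "cisco_ios", "host": "10.0.0.1",
     "username": "backup", "password": "secret"},
    {"device_type": "cisco_ios", "host": "10.0.0.2",
     "username": "backup", "password": "secret"},
]

outdir = Path("configs") / date.today().isoformat()
outdir.mkdir(parents=True, exist_ok=True)

for device in DEVICES:
    conn = ConnectHandler(**device)        # SSH in via your common bastion
    config = conn.send_command("show running-config")
    (outdir / f"{device['host']}.cfg").write_text(config)
    conn.disconnect()
```

Even this small amount of automation gives you a dated archive of configs to diff and analyze in step 3.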

Summary

You don't need to overhaul your organization to see the benefits of DevOps. You can make a real difference to your team's operational load and happiness by focusing your limited resources on greater network visibility and intelligent prioritization. Once you see real results and buy yourself some breathing room, you'll be able to dive deeper into DevOps with a more stable network and an intimate knowledge of your requirements.

Things I find amusing from around the Internet…

 

DARPA Challenge Targets The Electromagnetic Spectrum

Forget about IPv4 running out, where will we get more EM spectrum? Sounds a bit ominous from the title, but read through and then think about how DARPA will be using machine learning to find a way to reduce bandwidth bottlenecks.

 

Introducing Application Insights Analytics

Ignoring their horrible choice of pie charts, the fact is this system “ingests over 1 trillion events and 600TB a day”. That’s impressive, I don’t care who you are. And it’s interesting to note the steps Microsoft® is taking in the field of application analytics.

 

Startup Uses Mathematical Verification For Network Security

This takes software-defined networking to a whole new level. I think it’s a great step forward for network security, but it still won’t prevent an employee from falling victim to a phishing scam, or social engineering in general. Data just has a way of escaping, no matter what.

 

U.S. Textile Industry Turns to Tech as Gateway to Revival

How soon before we can change the color of our clothes to match our mood? And then, how long before someone hacks my shirt?

 

Data USA makes government data easier to explore

Because if there is one thing our government specializes in, it's making things easy to understand. But hey, if you are looking for data sets to get started with exploring some analytics tools, this is a good place to go shopping.

 

The 8-Bit Game That Makes Statistics Addictive

As if I needed more reasons to get excited about statistics. Whut? Am I the only one enjoying this game? OK then.

 

Should I be upset that the NYT credulously reviewed a book promoting iffy science?

Yes, yes you should. We *all* should be upset. Unfortunately, this is the world in which we live, when NYT articles put opinion on display and don't back it up with facts. Then again, if we wanted the facts we'd never allow someone like Dr. Oz to be so popular.

 

I never worry about database design

And neither should you.

 


As government agencies shift their focus to virtualization, automation and orchestration, cloud computing, and IT-as-a-Service, those who were once comfortable in their position as jacks-of-all-IT-trades are being forced to choose a new career path to remain relevant.

 

Today, there’s very little room for “IT generalists.” A generalist is a manager who possesses limited knowledge across many domains. They may know how to tackle basic network and server issues, but may not understand how to design and deploy virtualization, cloud, or similar solutions that are becoming increasingly important for federal agencies.

 

But IT generalists can grow their careers and stay relevant. That hope lies in choosing between two different career paths: that of the “IT versatilist” or “IT specialist.”

 

The IT Versatilist

 

An IT versatilist is someone who is fluent in multiple IT domains. Versatilists have broadened their knowledge base to include a deep understanding of several of today’s most buzzed-about technologies. Versatilists can provide their agencies with the expertise needed to architect and deliver a virtualized network, cloud-based services, and more.

 

Versatilists also have the opportunity to help their agencies move forward by mapping out a future course based on their familiarity with deploying innovative and flexible solutions. This strategic support enhances their value in the eyes of senior managers.

 

The IT Specialist

 

Like versatilists, IT specialists have become increasingly valuable to agencies looking for expertise in cutting-edge technologies. However, specialists focus on a single IT discipline, such as a specific application. For example, a specialist might have a very deep grasp of security or storage, but not necessarily expertise in other adjacent areas.

 

Still, specialists have become highly sought-after in their own right. A person who’s fluent in an extremely important area, like network security, will find themselves in-demand by agencies starved for security experts. This type of focus can nicely complement the well-rounded aspect that versatilists bring to the table.

 

Where does that leave the IT generalist?

 

Put simply – on the endangered list.

 

The government is making a major push toward greater network automation. Yes, this helps take some items off the plates of IT administrators – but it also minimizes the government’s reliance on human intervention. Those who have traditionally been “keeping the lights on” might be considered replaceable commodities in this type of environment.

 

If you’re an IT generalist, you’ll want to expand your horizons to ensure that you have a deep knowledge and expertise of IT constructs in at least one relevant area. Relevant disciplines will most likely center on things like containers, virtualization, data analytics, OpenStack, and other new technologies.

 

Training on these solutions will become essential, and you may need to train yourself. Attend seminars or webinars, scour educational books and online resources, and lean on vendors to provide additional insight and background into particular products and services.

 

Whatever the means, generalists must become familiar with the technologies and methodologies that are driving federal IT forward. If they don’t, they risk getting left out of future plans.

 

Find the full article on our partner DLT’s blog, TechnicallySpeaking.

Recently, a customer who’s already got a vSAN implementation in place, but only for testing purposes, asked me about the potential hazards of virtualizing some of their databases onto vSAN. To be fair, the initial conversation began with simply using VMware as a basis for some of their mission-critical databases. Now, I’ve always been a V1 (Virtualize First) proponent, and can remember going back to my early days at EMC, when I gave a well-received presentation on virtualizing mission-critical apps onto VMware. However, I’ve also been a realist in knowing that not every application is a viable candidate for virtualization.

 

There are many reasons why some apps may simply not be appropriate candidates to be VMs. Historically, though, these have been functional anomalies: hardware insurmountables like dongles, Unix operating systems like AIX or Solaris, or licensing characteristics wherein, for example, Oracle might demand licensing every socket in an entire environment in order to place an Oracle infrastructure into even one segregated cluster. In the latter case, monetarily, this proved to be prohibitive. Incidentally, it seems as if Oracle is loosening its stance on this policy, offering customers the option to prove the isolation of that cluster so that only the sockets on the hosts to which the Oracle app might be moved get licensed, alleviating much of the cost that made it prohibitive.

 

I’ve long felt that even a single VM that consumes an entire ESX host would be preferable to standing that same machine up on bare metal. Things like uptime, vMotion, VMware snapshotting, etc. add so much functionality on an architectural level that, to me as an administrator of the VMware infrastructure, it still was logical to virtualize.

 

We have so much power available now to allocate to individual VMs that virtualizing a database, even a high-transaction database, is not only acceptable, but preferable.

 

Adding vSAN into the equation, particularly today with all its newer specifications, seems to me to be, again, simply the logical choice. Now that the IO issues are addressable, thanks to the option of an all-SSD vSAN implementation, even mission-critical databases can be delivered all the disk-based IO they require, along with the extended functionality of disk redundancy across hosts. vSAN not only improves on the benefits that vSphere brings to the equation, but also increases availability to an extent that hadn’t really ever been available previously.

 

VMware has some really solid reference architecture on virtualizing Oracle: Here

And Microsoft SQL Server (albeit on the preceding version of vSAN, v6.1):

Here

 

These are really excellent references for the design of the environment, and also offer some fantastic design points to follow when implementing the databases themselves.

 

I recently had the benefit of a presentation in Palo Alto from the vSAN team, including the always entertaining and hugely knowledgeable Rawlinson Rivera ( @PunchingClouds ), who briefed those of us in the Storage Field Day crew (#SFD9) on the benefits of vSAN and the newest features in version 6.2. I would be happy to expound on these, but would rather refer you to the following:


Here are all the presentations we received: Here

 

So, with that in mind, to the question “Should I virtualize this database?” I have to say the answer is: most definitely.

I'm thrilled to be attending and engaging in Interop® 2016, which will be held at the Mandalay Bay Convention Center in Las Vegas from May 2-6. Even better, I'm honored to say I'll also be speaking at the event:


IT pros need to bridge any technology construct to business utility. Utility manifests as disruptive innovation that will add value to the business and is usually reflected in applications. To realize maximum value, the end game for IT operations is enabling continuous integration and delivery of services. There are four skills that any IT professional can learn and use to enable disruptive innovation without causing disruption. This session will introduce the DART skills: Discovery, Alerting, Remediation, and Troubleshooting.

Attendees will receive:

    • A walk-through of the DART framework to deal with DevOps, Infrastructure as code, and Hybrid IT.
    • Examples of the skills applied to real-world application scenarios.
    • Best practice tips and techniques to maximize each skill's utility to the business.

 

P.S. I’ll be there with my fellow Head Geek, adatole. If you’ll be at Interop Las Vegas, let us know and we can schedule a meet up.

 

Here's my Interop 2016 schedule:

 

Monday:

Dark Reading Cyber Security Summit - Day 1

Monday, 8:30AM - 5:00PM

Speakers: Tim Wilson (Dark Reading), Michele Fincher (Social-Engineer)

Session Type: Summit

Track: Security

Hands-On Hacking

 

-OR-

 

Hands-On Hacking

Monday, 8:30AM - 5:00PM

Speaker: John H. Sawyer (InGuardians)

Session Type: Workshop

Track: Security

 

Tuesday:

Container Summit

Tuesday, 8:30AM - 5:00PM

Speakers: Bryan Cantrill, Jamie Dobson, Tianon Gravi, James Ford, Tom Jackson, Casey Bisson, Jason Mendenhall, Ken Owens, Victor Gajendran, Jane Arc, Paulo Pereira

Session Type: Summit

Track: Storage

 

-OR-

 

Dark Reading Cyber Security Summit - Day 2

Tuesday, 8:30AM - 5:00PM

Speakers: Tim Wilson (Dark Reading), Stuart McClure (Cylance), Andy Jordan (Bishop Fox), Bhaskar Karambelkar (ThreatConnect), Gunter Ollmann (Vectra Networks), David Bradford (Advisen), Chris Scott (Crowdstrike)

Session Type: Summit

Track: Security

 

Wednesday:

IDC at Interop Breakfast: Delivering Digital Transformation at Scale: Network Trends and Architectures

Wednesday, 7:15AM - 8:30AM

Session Type: Special Events

 

Hybrid Is the Assumption - So What's the Plan?

Wednesday, 11:45AM - 12:45PM

Speaker: Mark Thiele (Cloud Cruiser)

Session Type: Conference Session

Track: Cloud Connect

 

Trends in SaaS

Wednesday, May 4 | 2:15PM - 3:00PM

Speaker: Ryan Floyd (Storm Ventures)

Session Type: Conference Session

Track: Cloud Connect

 

Thursday:

Containers, Orchestrators and Platforms: The Impact on Virtualization

Thursday, 10:30AM - 11:30AM

Speaker: Scott Lowe (VMware, Inc.)

Session Type: Conference Session

Track: Virtualization & Data Architecture

 

DART: An IT Skills Framework to Disrupt Without Disrupting

Thursday, 11:45AM - 12:45PM

Speaker: Kong Yang (SolarWinds)

Session Type: Conference Session

 

ABOUT INTEROP

Interop is the leading global IT infrastructure event series, offering in-depth education alongside a showcase of emerging technologies in an independent, vendor-neutral environment. For 30 years, Interop has brought the IT community together to explore the latest in network infrastructure, encouraging collaboration, and interoperability. Through dynamic conference programs, Interop helps professionals at all career levels leverage the network, systems and applications that enable business innovation. The Interop Expo and InteropNet Demo Lab provide immersive, hands-on experiences, while connecting enterprise IT buyers with leading suppliers. Interop Las Vegas is the flagship event held each spring, with an annual event in Tokyo and Cloud Connect China in Shanghai. For more information, visit interop.com. Interop is organized by UBM Americas, a part of UBM plc (UBM.L), an Events First marketing and communications services business. For more information, visit ubmamericas.com.


 

O Captain! My Captain! The battle is won,

The votes are in, the bracket is done.

Though the waters were murky, now they’re clear as crystal,

Without a doubt, the ultimate captain wields a blaster pistol.

Introducing the Captain of the Millennium Falcon, Han Solo.

In twelve parsecs or less, he’ll get you where you need to go!

 

Han Solo was the captain that everyone loved to hate in this bracket battle.

Somehow he managed to stomp out the competition in every round, and ultimately earned the title of The Most Legendary Captain of All Time!

 


 

What are your final thoughts on this year’s bracket battle?

 

Do you have any bracket theme ideas for next year?

 

Tell us below!

Welcome to the Actuator! I'm looking to make this a series of posts that provide links, musings, and snark from the series of tubes known as the Internet. If you don't know what an actuator is, you may be in the wrong place. You can show yourself out now.

 

Enjoy!

 

This is what happens when you reply to spam email

Reminds me of a similar conversation I had with a Mr. Yan years ago. He was disappointed that I did not show up at that hotel in Lagos. Twice.

 

How to avoid brittle code

“If it hurts, do it more often.” That’s the same advice I would give my players back in my days of coaching basketball. Anytime you find yourself out of your comfort zone, find a way to make things more comfortable. Applying this concept to development just makes sense to me (now, of course).

 

Developers can run Bash Shell and user-mode Ubuntu™ Linux® binaries on Windows 10

Coming on the heels of the SQL Server® on Linux announcement, this is something else that is making adatole cry tears of joy. At least I think they are tears of joy. Hard to tell from here.

 

Thanks For Ruining Another Game Forever, Computers

Has it been 20 years since Deep Blue? Nice recap of how computers are slowly ruining games for humans, and how.

 

Passwords, 2FA, and Your “Digital Legacy”

I’ve started using some password management software recently, but didn’t think about the implications of 2FA and what it means should a family member need to access my accounts. This post is a good reminder about the extra steps needed.

 

United Airlines implements web security based upon surveys of their users that are infected with keystroke logging.

I wish I were making this up, but it’s a real conversation, folks.

 

Hacker reveals $40 attack that steals police drones from 2km away

I’d like to propose that we rename IoT the “Internet of Unsecured Things” (IoUT). Maybe that way we can educate people about the nature of connected systems.

 

Microsoft® launches Bot Framework to let developers build their own chatbots

I’ve seen this movie before. It doesn’t end well for the humans. Then again, maybe it does:

 


The incorrect use of personal devices or the inadvertent corruption of mission-critical data by a government employee can turn out to be more than simple accidents. These activities can escalate into threats that can result in national security concerns.

 

These types of accidents happen more frequently than one might expect — and they’ve got government IT professionals worried, because one of the biggest concerns continues to be threats from within.

 

In last year's cybersecurity survey, my company SolarWinds discovered that administrators are especially cognizant of the potential for fellow colleagues to make havoc-inducing mistakes. Yes, it’s true: government technology professionals are just as concerned about the person next to them making a mistake as they are of an external Anonymous-style group or a rogue hacker.

 

So, what are agencies doing to tackle internal mistakes? Primarily, they’re bolstering federal security policies with their own security policies for end users. This involves gathering intelligence and providing information and training to employees about possible entry points for attacks.

 

While this is a good initial approach, it’s not nearly enough.

 

The issue is the sheer volume of devices and data that are creating the mistakes in the first place. Unauthorized and unsecured devices could be compromising the network at any given time, without users even realizing it. Phishing attacks, accidental deletion or modification of critical data, and more have all become much more likely to occur.

 

Any monitoring of potential security issues should include the use of technology that allows IT administrators to pinpoint threats as they arise, so they may be addressed immediately and without damage.

 

Thankfully, there are a variety of best practices and tools that address these concerns and nicely complement the policies and training already in place, including:

 

  • Monitoring connections and devices on the network and maintaining logs of user activity.
  • Identifying what is or was on the network by monitoring network performance for anomalies, tracking devices, offering network configuration and change management, managing IT assets, and monitoring IP addresses.
  • Implementing tools identified as critical to preventing accidental insider threats, such as those for identity and access management, internal threat detection and intelligence, intrusion detection and prevention, SIEM or log management, and Network Admission Control.

 

Our survey respondents called out each of these tools as useful in preventing insider threats. Together and separately, they can assist in isolating and targeting network anomalies. They can help IT professionals correlate a problem directly to a particular user. The software, combined with the policies and training, can help administrators attack an issue before it goes from simple mistake to “Houston, we have a problem.”

 

The fact is, data that’s accidentally lost can easily become data that’s intentionally stolen. As such, you can’t afford to ignore accidental threats, because even the smallest error can turn into a very large problem.

 

Find the full article on Defense Systems.

 

Interested in this year’s cyber security survey? Go here.

We have been watching the spread of ransomware and this malware's success with increasing concern.

Hospitals appear to be of particular interest this year.

 

And who hasn't had a friend or colleague call in a panic this year already?

 

As many of you know, most ransomware gets onto the system through a phishing attack, so Adobe's emergency update earlier this week was concerning on multiple levels.

 

1 - Does this mean we can expect ransomware drive-by-downloads?

2 - What is the next bug in Flash that will be exploited?

 

If you haven't read about this update yet, you can hit any of Ars Technica, MacRumors, and of course the popular press.

 

This patch includes updates to prevent the Cerber form of ransomware, and the fact that it is an emergency patch means the exploit has been seen in the wild.

If you haven't already done so, please update Flash on both Windows and macOS.

 

And share your experiences. As we all know with ransomware: either you have a backup, or you pay up.

kong.yang

The CIO’s SLA

Posted by kong.yang Employee Apr 8, 2016

The CIO’s SLA is their service level agreement to their organization’s CEO, COO, and CFO. Heck, it’s their agreement to all of their org’s C-levels. For IT professionals, the SLA represents their CIO’s goals and objectives. And in this case, the SLA acronym is so apropos.

  • S stands for secure. Try not to get breached.
  • L stands for lean. Maximize ROI and do more with less.
  • A stands for agile. Quickly pivot on anything (technology, services, and IT processes) that will bring benefits to business operations.

 

These goals have a tremendous impact on IT professionals and their daily modus operandi. But don't take my word for it; the SolarWinds 2016 IT Trends Report shows that IT professionals recognize these directives as the goals for success in the Hybrid IT paradigm. According to the IT professionals who responded, the top three barriers to greater cloud adoption, by weighted rank, are security and compliance, the need to support legacy systems, and budget limitations. Security and compliance is the S component. Budget limitations represent the L component. The need to support legacy systems speaks to tech debt and adds to tech inertia, which is the A component.

 

These challenges that IT pros have to overcome align exactly with the CIO’s SLA. Check out the rest of the results from the SolarWinds 2016 IT Trends Report to learn more.

Does the CIO’s SLA align with your directives? Do you agree with the results of the IT Trends Report? Please comment below.


 

The Quartermaster round had a landslide victory on one side & a match-up that was almost too close to call on the other.

 

Han Solo continued his winning streak despite his mixed popularity after causing two huge upsets in rounds 2 & 3 (sorry Trekkies).

Captain Crunch didn’t go quietly, though; he still managed to steal over 10% of the vote.

However, Crunch can’t take all the credit; most of his votes were probably from upset Picard & Kirk fans, as evidenced by shuth’s comment: “I rebelled and voted for Crunch after the Picard vs Solo debacle!”

 

Captain America quietly moved his way through each round & managed to take out Captain Jack Sparrow on his way into the finals.

cahunt would like to remind you to “Vote Captain America and make Thwack Great Again!!!!!

 

Did anyone predict this final match-up from the beginning?

 

It’s Han Solo vs Captain America battling it out in the final round!

 

O Captain! My Captain! The battle’s nearly done,

As our 4th bracket battle comes to a close, we hope you at least had fun.

Whether your captain carries a blaster pistol or shield,

It’s up to you to decide who you would follow into the battlefield.

Crown the ultimate captain we will—vote you must,

Don’t be late, the polls close April 10th at dusk.

 

Access the bracket and help us crown the ultimate captain HERE>>

sqlrockstar

Hello Again

Posted by sqlrockstar Employee Apr 6, 2016


It's been a while since my last post. I could say something like "I've been busy," and while that would certainly be true, it doesn't excuse my silence here.

 

The biggest reason I haven't been saying a lot here is because I have no idea what I should be saying here. I'm stretched a bit thin between my blog, my newsletter, external publications, webinars, lab episodes, speaking engagements, and events in general. Oh, and throw in a healthy Twitter addiction, a Facebook page, Instagram, a Pinterest board, and even a G+ account that still exists and yeah, you might get an idea as to why I never seem to have the time to crank out a few hundred extra words.

 

At some point last week I found myself thinking about how I wish I had a place to try some easy, light posts. The kind of posts that I could put in a handful of places and make an effort to consolidate some streams.

 

So that brings me back here. I'm hopeful that I can keep some momentum going in this space. If you like what you read, please let me know, as that feedback will keep me going. It's encouraging when I put together a post and see that people enjoy my crazy thoughts. Look for more of the same, along with a dose of links and thoughts about data and the data industry.

 

Thumbs up, let's do this.

All too often, federal IT personnel misconstrue software as being able to make their agency compliant with various regulations. It can’t – at least not by itself.

 

Certainly, software can help you achieve compliance, but it should only be viewed as a component of your efforts. True and complete compliance involves defining, implementing, monitoring, and auditing processes so that they adhere to the parameters that have been set forth within the regulations. First and foremost, compliance requires strategic planning, which depends on people and management skills. Software complements this by being a means to an end.

 

To illustrate, let’s examine some regulatory examples:

 

  • Federal Information Security Management Act (FISMA): FISMA’s requirements call for agencies to deploy multifaceted security approaches to ensure information is kept safe from unauthorized access, use, disclosure, disruption, modification, and destruction. Daily oversight can be supported by software that allows teams to be quickly alerted to potentially dangerous errors and events.

 

  • Federal Risk and Authorization Management Program (FedRAMP): FedRAMP may be primarily focused on cloud service providers, but agencies have a role to ensure their providers are FedRAMP compliant, and to continually “assess, authorize and continuously monitor security controls that are the responsibility of the agency”. As such, FedRAMP calls for a combination of hands-on processes and technology.

 

  • Health Insurance Portability and Accountability Act (HIPAA): The response to HIPAA has typically centered on the use of electronic health records, but the Act requires blanket coverage that goes well beyond technology use. As such, healthcare workers need to be conscious of how patient information is shared and displayed.

 

  • Defense Information Systems Agency Security Technical Implementation Guides (STIGs): The STIGs provide guidelines for locking down potentially vulnerable information systems and software. They are updated as new threats arise. It’s up to federal IT managers to closely follow the STIGs to ensure the software they’re using is not only secure, but working to protect their systems.

 

Particular types of software can significantly augment the people and processes that support your compliance efforts, so take a closer look at the following tools:

 

  • Event and Information Management tracks events as they occur on your network and automatically alerts you to suspicious or problematic activity. This type of software uses intelligent analysis to identify events that are inconsistent with predetermined compliant behaviors, and is intelligent enough to issue alerts before violations occur (see the toy sketch after this list).

 

  • Configuration Management allows for the configuration and standardization of routers, firewalls, and switches to ensure compliance. This type of software can also be useful in identifying potential issues that might adversely affect compliance before they come to pass.

 

  • Patch Management is critical for closing known vulnerabilities before they can be exploited. It can be very handy in helping your organization maintain compliance with regard to security and ensuring that all operating systems and applications are updated.
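To make the event-management idea concrete, here is a toy sketch in Python. It scans a log for failed-login events and raises an alert when a per-user threshold is crossed. The log path, pattern, and threshold are illustrative assumptions; a real SIEM correlates events across many sources and alerts in real time.

```python
# Toy sketch of rule-based event alerting; the path, pattern, and
# threshold are assumptions for illustration, not product settings.
import collections
import re

LOG_PATH = "auth.log"            # assumed log file
PATTERN = re.compile(r"Failed password for (?P<user>\S+)")
THRESHOLD = 5                    # alert at this many failures per user

failures = collections.Counter()
with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            failures[match.group("user")] += 1

for user, count in failures.items():
    if count >= THRESHOLD:
        # A real SIEM would correlate across sources and page on-call.
        print(f"ALERT: {count} failed logins for user {user}")
```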

 

Each of the aforementioned types of software can form a collective safety net for FISMA compliance and serve as a critical component of a security plan, but they can’t be the only component if you’re to achieve your compliance goals. As the old saying goes, the rest is up to you.

 

Find the full article on our partner DLT’s blog, TechnicallySpeaking.


 

The Gunner round was full of surprises for the Elite 8.

Once again thwacksters were torn between Star Trek & Star Wars and once again, Star Wars prevailed!

Han Solo beat out Captain Picard 60/40 in a very heated match-up.

 

Team Picard:

  • As much as I love Star Wars, if I had to choose which of these two Captains to serve under, it would have to be Jean-Luc.silverbacksays

 

Team Han Solo:

  • “Han Solo is a bad mama jama! It took a Sith lord to take him down, and he even had to drop his guard on purpose!vbetts

 

Which team are you? Are you rooting for or against the captain of the Millennium Falcon?

 

Round 3 Shutout:

 

There really weren’t any other nail-biters this round, so we’ll go straight to the award for the biggest shutout of round 3.

 

Jack Sparrow vs Davy Jones: Jack Sparrow wins this round with nearly 90% of the vote! I guess there was something to that mysterious jar of dirt…

silverbacksays called this one way before the polls closed: “It's got to be Cap'n Jack for this match up I think! After all, he did steal Jones' heart!

 

To see who else is advancing on in the next round, check out the updated bracket & start voting for the Quartermaster round!

We need your help & input as we get one step closer to crowning the ultimate captain!

 

Access the bracket and make your picks HERE>>
