DevOpsDays DC Denouement

The SolarWinds booth at DevOpsDays DC marked SolarWinds' third appearance at the event (after Columbus and Austin (https://thwack.solarwinds.com/community/solarwinds-community/geek-speak_tht/blog/2016/05/23/devopsdays-daze)). I could play up the cliche and say that the third time was the charm, but the reality is that those of us who attended - myself, Connie (https://thwack.solarwinds.com/people/ding), Patrick (https://thwack.solarwinds.com/people/patrick.hubbard), and Andy Wong - were charmed from the moment we set foot in the respective venues.

While Kong (https://thwack.solarwinds.com/people/kong.yang) and Tom (https://thwack.solarwinds.com/people/sqlrockstar) - my Head-Geeks-In-Arms - are used to more intimate gatherings, like VMUGs and SQL Saturdays, I'm used to the big shows: Cisco Live, Interop, Ignite, VMworld, and the like. DevOpsDays is a completely different animal, and here's what I learned:

Focus

The people coming to DevOpsDays are focused. As much as I love to wax philosophical about all things IT, and especially about all things monitoring, the people I spoke with wanted to stay on topic. That meant cloud, continuous delivery, containers, and the like. While it might have been a challenge for an attention-deficit Chatty Cathy like me, it was also refreshing.

There was also a focus of purpose. DevOpsDays is a place where attendees come to learn, not to be marketed to (or worse, AT). So there are no scanners, no QR codes on the badge, nothing. People who come to DevOpsDays can't be guilted or enticed into giving vendors their info unless they REALLY mean it, and then it's only the info THEY want to give. Again, challenging, but also refreshing.

Conversations

That focus reaps rewards in the form of real conversations. We had very few drive-by visitors. People who approached the table were genuinely interested in hearing what SolarWinds was all about. They might not personally be using our software (although many were), but they were part of teams and organizations that had use for monitoring. More than once, someone backed away from the booth, saying, "Hang on. I gotta see if my coworkers know about this."

The conversations were very much a dialogue, as opposed to a monologue. Gone was the typical trade show 10-second elevator pitch. We got to ask questions and hear real details about people's environments, situations, and challenges. That gave us the opportunity to make suggestions, recommendations, or just commiserate.

Which meant I had a chance to really think about...

The SolarWinds (DevOps) Story

"So how exactly does SolarWinds fit into DevOps?" This was a common question, not to mention a perfectly valid one given the context. My first reaction was to talk about the Orion SDK  and how SolarWinds can be leveraged to do all the things developers don't really want to recreate when it comes to monitoring-type activities. Things like:

  • A job scheduler to perform actions based on date or periodicity.
  • A built-in account database that hands off username/password combinations without exposing them to the user.
  • The ability to push code to remote systems, execute it, and pull back the result or return code.
  • The ability to respond with an automatic action when that result or return code is not what was expected.
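
Most of those capabilities are surfaced through the SolarWinds Information Service (SWIS), which is what the Orion SDK talks to. As a rough illustration of the connection pattern (not any one feature above), here is a minimal Python sketch using the orionsdk client; the hostname and credentials are placeholders, and the query simply lists nodes that aren't currently "Up":

import requests
from orionsdk import SwisClient

# Orion commonly runs with a self-signed certificate, so silence the warning.
requests.packages.urllib3.disable_warnings()

# Placeholder server and credentials - point these at your own Orion instance.
swis = SwisClient("orion.example.com", "api_user", "api_password")

# SWQL query: nodes whose status is anything other than Up (Status 1 = Up).
results = swis.query(
    "SELECT NodeID, Caption, IPAddress, Status "
    "FROM Orion.Nodes WHERE Status <> 1"
)

for node in results["results"]:
    print(f"{node['Caption']} ({node['IPAddress']}) - status {node['Status']}")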

But as we spoke to people and understood their needs, some other stories emerged:

  • Using the Orion SDK to automatically add a system which was provisioned by chef, jenkins, or similar tools into monitoring.
  • Perform a hardware scan of that system to collect relevant asset and hardware inventory information.
  • Feed that information into a CMDB for ongoing tracking.
  • Scan that system for known software.
  • Automatically apply monitoring templates based on the software scan.

This is part of a continuous delivery model that I hadn't considered until digging into the DevOpsDays scene, and I'm really glad I did.
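
For example, a post-provisioning step in a Chef or Jenkins pipeline could call the Orion SDK to register the new system for monitoring - the first bullet above. This is a rough sketch of that step, assuming the Python orionsdk client and SNMP polling; the server, credentials, IP address, community string, and caption are all placeholders:

import requests
from orionsdk import SwisClient

requests.packages.urllib3.disable_warnings()

swis = SwisClient("orion.example.com", "api_user", "api_password")

node_props = {
    "IPAddress": "10.10.20.30",      # address handed back by the provisioning tool
    "EngineID": 1,                   # polling engine to assign the node to
    "ObjectSubType": "SNMP",
    "SNMPVersion": 2,
    "Community": "public",
    "Caption": "web-frontend-042",
}

# Create the node; SWIS returns the URI of the new Orion.Nodes entity.
node_uri = swis.create("Orion.Nodes", **node_props)
print("Added to monitoring:", node_uri)

# A fuller version would next attach status/response-time pollers
# (swis.create("Orion.Pollers", ...)) and kick off the inventory scan,
# which is where the hardware/software discovery and template steps come in.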

Attending the conferences and hearing the talks, I also believe strongly that traditional monitoring - fault, capacity, and performance - along with alerting and automation, is still a part of the culture that DevOps advocates and practitioners don't hear about often enough. And I'm submitting CFP after CFP until I have a chance to tell that story.

Is SolarWinds a hardcore DevOps tool? Of course not. If anything, it's a hardcore supporter on the "ops" side of the DevOps arena. Even so, SolarWinds tools have a valid, rightful place in the equation, and we're committed to being there for our customers. "There" in terms of our features, and "there" in terms of our presence at these conferences.

So come find us. Tell us your stories. We can't wait to see you there!

Comment
  • Very good adatole!

    • Using the Orion SDK to automatically add a system which was provisioned by chef, jenkins, or similar tools into monitoring.
    • Perform a hardware scan of that system to collect relevant asset and hardware inventory information.
    • Feed that information into a CMDB for ongoing tracking.
    • Scan that system for known software.
    • Automatically apply monitoring templates based on the software scan.

    This is good, but I feel there are a few caveats:

    There needs to be a way to correlate the resulting CIs from the CMDB to the objects in Orion for future reference for ticketing (ServiceNow, for example).

    Regarding the known software, I know in our shop that can be challenging. There can be several instances of a product such as Java.exe running, and that by itself does not imply templates F-S get automatically applied, because this is a multi-tenant environment where several applications run the same executable but with different net names and log to different directory paths. That is more the norm here, and it is handled on a case-by-case basis because the responsible groups vary.

    Something to tie into this is the validation of DNS for forward and reverse lookup to the new object, and ensuring there is not something already in Orion with that same IP that may not have been decommissioned or had its IP changed.

    I see these as becoming more the norm as provisioning becomes more automated.
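
    A rough sketch of that validation step (assuming the Python orionsdk client and the standard library's socket module; the hostname, IP, and credentials are placeholders) might look like:

    import socket
    from orionsdk import SwisClient

    def dns_checks_out(hostname, ip):
        # Forward lookup: the name should resolve to the expected IP.
        # Reverse lookup: the IP should resolve back to the same host.
        try:
            forward = socket.gethostbyname(hostname)
            reverse_name, _, _ = socket.gethostbyaddr(ip)
        except socket.error:
            return False
        return forward == ip and reverse_name.split(".")[0] == hostname.split(".")[0]

    def ip_already_in_orion(swis, ip):
        # Look for an existing (possibly stale or undecommissioned) node on the same IP.
        rows = swis.query("SELECT NodeID, Caption FROM Orion.Nodes WHERE IPAddress = @ip", ip=ip)
        return len(rows["results"]) > 0

    swis = SwisClient("orion.example.com", "api_user", "api_password")
    if dns_checks_out("web-frontend-042.example.com", "10.10.20.30") and not ip_already_in_orion(swis, "10.10.20.30"):
        print("Safe to add the node to monitoring")
    else:
        print("Hold off: DNS mismatch or the IP is already registered in Orion")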
