
Geek Speak


This week's Actuator comes to you from June, where the weather has turned for the better after what seemed like endless rain. We were able to work in the yard, and it reminded me that one of the best ways to reduce stress is physical exercise. So wherever you are, get moving, even if it's just a walk around the block.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


The Vehicle of the Future Has Two Wheels, Handlebars, and Is a Bike

Regular readers of the Actuator know I have a fondness for autonomous vehicles. This article made me rethink that; our future may be best served by bicycles, at least in more urban areas.


SpaceX Starlink satellites dazzle but pose big questions for astronomers

Is there any group of people Elon Musk hasn’t upset at this point?


How much does it cost to get an employee to steal workplace data? About $300

And for the low price of $1,200, you can get them to steal all the data. This is why security is hard: because humans are involved.


Real estate title insurance company exposed 885,000,000 customers' records, going back 16 years: bank statements, drivers' licenses, SSNs, and tax records

Setting aside the nature of the data, much of which I believe is public record, I want you to understand how the breach happened: lousy code. Until we hold individuals, not just companies, responsible for ignoring common security practices, we will continue to suffer data breaches.


Bad metadata means billions in unpaid royalties from streaming music services

Each paragraph I read made me a little sadder than the one before.


Artificial Intelligence Isn’t Just About Cutting Costs. It’s Also About Growth.

Let the machines do the tasks they're suited for, freeing up humans to do the tasks we're suited for. Yes, automation and AI are about growth, and about efficiency (i.e., cutting costs).


What 10,000 Steps Will Really Get You

I never knew why Fitbit chose 10,000 steps as a default goal, but this article may explain it. I'm a huge advocate for finding ways to get extra steps into your day, and I hope this article gets you thinking about how to add more of your own.


Pictured here is six yards of gravel. Not pictured is another 12 yards of topsoil. All moved by hand. I need a nap.


By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article written by my colleague Jim Hansen. It seems that our BYO challenges are not over, and Jim offers some great steps agencies can take to help with these issues.


In 2017, the Department of Defense (DoD) released a policy memo stating that DoD personnel—as well as contractors and visitors to DoD facilities—may no longer carry mobile devices in areas specifically designated for “processing, handling, or discussion of classified information.”


For federal IT pros, managing and securing "allowable" personal and government devices is already a challenge. Factor in the additional restrictions, along with the real possibility that not everyone will follow the rules, and mobile-device management and security can seem even more overwhelming.


Luckily, there are steps federal IT pros can take to help get a better handle on managing this seemingly unmanageable Bring Your Own Everything (BYOx) environment, starting with policy creation and implementation, and including software choices and strategic network segmentation.


Agency BYOx Challenges


Some agencies allow personnel to use their own devices, some do not. For those that do, the main challenges tend to be access issues: which devices are allowed to access the government network? Which devices are not?


For agencies that don’t, there’s the added challenge of preventing unauthorized use by devices that “sneak through” security checkpoints.


Implementing some of the best practices below to support your government cybersecurity solutions can help strengthen protection against BYOx threats.


Three-Step BYOx Security Plan


Step One: Train and Test


Most agencies have mobile device management policies, but not every agency requires personnel to take training and pass a policy-based exam. Training can be far more effective if agency personnel are tested on how they would respond in certain scenarios.


Effective training emphasizes the importance of policies and their consequences. What actions will personnel face if they don’t comply or blatantly break the rules? In the testing phase, be sure to include scenarios to help solidify personnel understanding of what to do when the solution may not be completely obvious.


Step Two: Access Control


Identity-based access management is used to ensure only authorized personnel are able to access the agency network using only authorized devices. Add a level of security to this by choosing a solution that requires two-factor authentication.


Additionally, be sure to create, maintain, and carefully monitor access-control lists to help ensure that users have access to only the networks and resources they need to do their jobs. When establishing these access control lists, include as much information as possible about the users and resources—systems and applications—they are allowed to access. A detailed list could aid in discovering and thwarting fraudulent access from a non-authorized device.
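As a rough illustration of the kind of detailed access-control list described above, here's a minimal sketch in Python. All user names, device IDs, and resource names are hypothetical; the point is that recording the device alongside the user lets you refuse a known user on an unknown (possibly personal) device.

```python
# Minimal sketch of a detailed access-control list: each entry records
# the user, the device they may use, and the resources they may reach.
# Every name and field here is a hypothetical illustration.
ACL = [
    {"user": "jsmith", "device_id": "GOV-LAPTOP-0042",
     "resources": {"hr-portal", "email"}},
    {"user": "mjones", "device_id": "GOV-PHONE-0117",
     "resources": {"email"}},
]

def is_allowed(user, device_id, resource):
    """Grant access only when the user, the device, and the requested
    resource all match a single ACL entry."""
    return any(
        entry["user"] == user
        and entry["device_id"] == device_id
        and resource in entry["resources"]
        for entry in ACL
    )

# A known user arriving from a non-authorized device is refused, which
# is how a detailed list helps discover fraudulent access attempts.
```

Real implementations live in directory services and network access control products rather than application code, but the matching logic is the same idea.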


Step Three: Implement the Right Tools


Mobile phones are far and away today’s biggest BYOx issue for federal IT pros. As a result, access control (step two) is of critical importance. That said, ensuring the following basic security-focused tasks are being implemented is a critical piece of the larger security picture:


• Patch management – Patch management is a simple and effective security measure. Choose a product that provides automated patch management to make things even easier and keep your personnel’s devices patched, up to date, and free of vulnerabilities and misconfigurations.


• Threat detection – Users often have no idea their devices have been infected, so it’s up to the federal IT pro to be sure a threat detection system is in place to help ensure that compromised devices don’t gain access to agency networks.


• Device management – If a user tries to attach an unauthorized device to the network, the quicker the federal IT pro can detect and shut down access, the quicker a potential breach is mitigated.


• Access rights management – Provisioning and deprovisioning personnel, and knowing and managing their access to critical systems and applications across the agency, is necessary to help ensure the right people are granted the right access to resources.




Sticking to the basics and implementing a logical series of IT and end-user-based solutions can help reduce the risk of mobile technologies.


Find the full article on our partner DLT’s blog Technically Speaking.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.



In the second post in this information security in a hybrid IT world series, let's cover how even the best-designed security controls and measures are no match for the human element.


“Most people don’t come to work to do a bad job” is a sentiment with which most people will agree. So, how and why do well-meaning people sometimes end up injecting risk into an organization’s hardened security posture?


Maybe your first answer would be falling victim to social engineering tricks like phishing. However, there's a more significant risk: unintentional negligence in the form of circumventing existing security guidelines or not applying established best practices. If you've ever had to troubleshoot blocked traffic or a user who can't access a file share, you know one quick fix is to disable the firewall or give the user access to everything. It's easy to tell yourself you'll revisit the issue later and re-enable that firewall or tighten down those share permissions. Later will probably never come, and you've inadvertently loosened some of the security controls.


It's easy to blame the administrator who made what appears to be a short-sighted decision. However, human nature prompts us to take these shortcuts. In our days on the savannah, our survival depended on taking shortcuts to conserve physical and mental energy for the harsh times on the horizon. Especially on short-staffed or overwhelmed teams, you save energy in the form of shortcuts that let you move on to the next fire. However many security issues may exist on-premises, "62% of IT decision makers in large enterprises said that their on-premises security is stronger than cloud security" (Dimensional Research, 2018). The stakes are even higher when data and workloads move to the cloud, where exploits of your data can have further reach.


In 2017, one of the largest U.S. defense contractors was caught storing unencrypted application credentials and sensitive data related to a military project on a public, unprotected AWS S3 instance. The number of organizations caught storing sensitive data in unprotected, public S3 instances continues to grow. However, dealing with the complexity of securing data in the cloud requires other tools for improving the security posture and helping to combat the human element in SaaS and cloud offerings: Cloud Access Security Brokers (CASBs).


Gartner defines CASBs as “on-premises, or cloud-based security policy enforcement points, placed between cloud service consumers and cloud service providers to combine and interject enterprise security policies as the cloud-based resources are accessed.” By leveraging machine learning, CASBs can aggregate and analyze user traffic and actions across a myriad of cloud-based applications to provide visibility, threat protection, data security, and compliance in the cloud. Also, CASBs can handle authentication/authorization with SSO and credential mapping, as well as masking sensitive data with tokenization.


Nifty security solutions aside, the best security tools for on-premises and off-premises are infinitely more effective when the people in your organization get behind the whole mission of what you are trying to accomplish.


Continuing user education and training is excellent. However, culture matters. Environments in which people feel they have a role in information security increase an organization’s security posture. What do you think are some of the best ways to change an organization’s culture when it comes to security?

We've established that choosing a one-size-fits-all solution won't work. So, what does work? Let's look at why we need tools in the first place, and what kind of work these tools take off our hands.


Two Types of Work

Looking at the kinds of work IT people do, there are two broad buckets. First, there's new work: the kind of work that has tangible value, either for the end customer or for the IT systems you operate and manage. This is the creative work IT people do to create new software, new automation, or new infrastructure. It's work from the fingers of a craftsman; it takes knowledge, experience, and creativity to create something novel. It's important to realize that this type of proactive work is impossible to automate. Consider the parallel to the manual labor artists and designers do to create something new; it's just not something a computer can generate or do for us.


Second, there's re-work and toil. These kinds of reactive work are unwanted. Re-work corrects quality issues in work done earlier: fixing bugs in software, improving faulty automation code, and mitigating and resolving incidents in production. It also includes customer support work after incidents and paying down technical debt caused by bad decisions in the past or a badly managed software lifecycle, which leads to outdated software or systems and architectures that haven't been adapted to new ways of working, scalability, or performance requirements. For IT ops, physical machines, snowflake virtual machines, and on-premises productivity systems (like email, document management, or collaboration tools) are good examples.


How Do Tools Fit In?

Now that we understand the types of work we do, we can see where automation tools come in: they take away re-work and toil. A well-designed toolchain frees up software and infrastructure engineers to spend more time on net-new work, like new projects, new features, or improvements to architecture, systems, and automation. In other words: the more time you spend improving your systems, the better they'll get. Tools help you break the cycle of spending so much time fixing things that broke that you never get around to preventing incidents in the first place. Automation tooling helps remove time spent on repetitive tasks that follow the same process each time.


By automating, you're creating a representation of the process in code, which produces consistent results every time. That lowers the variation inherent in walking through a process manually with checklists, which invariably yields a slightly different process with unpredictable outcomes. And because automation code is easy to improve, each iteration lowers the amount of re-work and faults. See how automating breaks the vicious circle? Instead, the circle spirals upward with each improvement.
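To make that concrete, here's a toy sketch of a manual checklist expressed as code. The "new user" process and all of its step names are hypothetical; the point is that once the steps live in code, every run executes them in the same order, which is where the consistency comes from.

```python
# A toy "new user" checklist expressed as code. The steps are
# hypothetical examples; codifying the process means no step can be
# skipped or reordered by accident, so every run looks the same.
def provision_user(username, log):
    steps = [
        ("create account", lambda: log.append(f"created {username}")),
        ("add to base groups", lambda: log.append(f"grouped {username}")),
        ("send welcome mail", lambda: log.append(f"mailed {username}")),
    ]
    for name, action in steps:
        action()  # each step runs exactly once, in order
    return log

# Two runs of the codified process produce identical results,
# unlike two humans (or one tired human) working from a checklist.
run1 = provision_user("alice", [])
run2 = provision_user("alice", [])
```

In practice this role is played by configuration management or workflow tooling rather than hand-rolled scripts, but the consistency argument is the same.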


A proper toolchain increases engineering productivity, which in turn leads to more, better, and quicker improvements, a lower failure rate of those improvements, and a quicker time to resolving any issues.


How Do I Know If Work Is a Candidate for Automation?

With Value Stream Mapping (VSM), a Lean methodology. It's a way of visualizing the flow of work through a process from start to finish. Advanced mappings include red and green labels for each step, identifying customer value, much like the new work and re-work we talked about earlier. Good candidates include anything that follows a fixed process or can be expressed as code.


It's easy to do a VSM yourself. Start with a large horizontal piece of paper or Post-It notes on a wall, and write down all the steps chronologically. Put them from left to right. Add context to each step, labeling each with green for new work or red for toil. If you're on a roll, you can even add lead time and takt time to visualize bottlenecks in time.


See a bunch of red steps close to each other? Those are prime candidates for automation.


Some examples are:

  1. If a piece of software is always tested for security vulnerabilities
  2. If you make changes to your infrastructure
  3. If you test and release a piece of new software using a fixed process
  4. If you create a new user using a manual checklist
  5. If you have a list of standard changes that can go to production after checking with the requirements of the standard change
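The mapping exercise above can even be sketched in code: model each step with a red or green label and scan for runs of adjacent red steps. The step names below are made-up examples, not a prescribed process.

```python
# Toy Value Stream Map: each step is (name, label), where "red" marks
# toil/re-work and "green" marks value-adding work. The step names
# are hypothetical examples.
vsm = [
    ("write code", "green"),
    ("manual security scan", "red"),
    ("manual regression test", "red"),
    ("write release notes", "green"),
    ("manual deploy checklist", "red"),
]

def automation_candidates(steps, min_run=2):
    """Return runs of at least min_run consecutive red steps; these
    are the prime candidates for automation."""
    runs, current = [], []
    for name, label in steps:
        if label == "red":
            current.append(name)
        else:
            if len(current) >= min_run:
                runs.append(current)
            current = []
    if len(current) >= min_run:
        runs.append(current)
    return runs
```

Running this on the toy map flags the adjacent manual scan and regression test as one run of red steps, exactly the "bunch of red steps close to each other" described above.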


But What Tools Do I Choose?

While the market for automation tooling has exploded, there are some great resources to help you see the forest for the trees.

  1. First and foremost: keep it simple. If you use a Microsoft stack, use Microsoft tools for automation. Use the tool closest to the thing you're automating. Stay within your ecosystem as a starting point. Don't worry about a tool that encompasses all the technology stacks you have.
  2. Look at overviews like the Periodic Table of DevOps Tools.
  3. Look at what the popular kids are doing. They're usually doing it for a reason, and tooling tends to come in generations. Configuration management from three generations ago is completely different from modern infrastructure-as-code tools, even if they do the same basic thing.


Next Up

Happy hunting for the tools in your toolchain! In the next post, I'll discuss a problem many practitioners have after their first couple of successful automation projects: tool sprawl. How do you manage the tools that manage your infrastructure? Did we just shift the problem from one system to another? A toolchain is supposed to simplify your work, not make it more complex. How do you stay productive and not be overloaded with the management of the toolchain itself? We'll look at integrating the different tools to keep the toolchain agile as well as balancing the number of tools.

Hey THWACKers! Welcome back for week 2 in machine learning (ML). In my last post, Does Data Have Ethics? Data Ethic Issues and Machine Learning, you may have noticed I mentioned "evil" four times, but also mentioned "good" four times. Well, you're in luck. After all that talk about evil and ethics, I want to share with you some good that's been happening in the world.


But who can talk about goodness without mentioning the dark circumstances "the machines" don't want you to know about?


For those who aren't familiar, the Library of Alexandria was a place of wonder, a holder of so much knowledge, documentation, and more. But what happened? It was DESTROYED.


In preparation for this topic, and because I wanted to mention some very specific library destructions over the years, I found this great source on Wikipedia so you can see just how much of our history has been lost.


Some notable events were:

  • ALL the artifacts, libraries, and more destroyed by ISIS
  • The 200+ years’ worth of artifacts, documents, and antiquities destroyed in the National Museum of Brazil fire
  • The very recent fire at Notre Dame, where the fires are hardly even out while this topic smolders within me
  • The Comet Disaster that breaks off and destroys this sleepy Japanese town every 1,200 years (OK, so this one’s from an anime movie, but natural disasters are disasters all the same.)



Image: Screen capture from the movie “Your Name” (Original title: Kimi no na wa) 50:16



But how can machine learning help with this? Because I'm sure you all think “the machines” will cause the next level of catastrophe and destruction, right?


I'd like to introduce you to someone I'm honored to know, whose work has inspired growth and change, and can not only preserve the past but enlighten the future.

This inspiration is Tkasasagi, who has been setting the ML world on fire with natural language processing and evolutionary changes to the translation of Ancient, Edo era, and cursive Hiragana.


To give you a sense of the significance of this, there's a quote from last June, "If all Japanese literature and history researchers in the whole country help transcribing all pre-modern books in Japan, it will only take us 2000 years per persons to finish it."


Let's put that into perspective. There are countless tomes of knowledge, learning, information, and education that document the history and growth of the Japanese culture and nation: an island nation in a region with some of the most active volcanoes and frequent earthquakes in the world. It's only a matter of time before more of this information suffers from natural disasters and is lost to the winds of time. But what can be done? How can it be preserved? That's exactly the exciting piece I'm so happy to share with you.


Here in the first epoch of this transcription project, machine learning does an OK job… but is it a complete job? Not even in the least. But fast forward a few weeks, and the results are staggering and impressive (even if nowhere near complete).


Images: https://twitter.com/tkasasagi/status/1036094001101692928

Now some of you may feel (justifiably so) that this is impressive growth in such a short amount of time, and I would agree. Not to mention the model is working with >99% accuracy at this point, which is impressive in its own right.

Image: https://twitter.com/tkasasagi/status/1115862769612599296


But the story doesn't end there—it continues literally day by day. (Feel free to follow Tkasasagi and learn about these adventures in real time.)


Every day, every little advancement in technologies like these, through natural language processing (NLP), computer vision (CV), and convolutional neural networks (CNNs), continues to grow the industry as a whole. You and I, as consumers of this technology, will eventually find our everyday activities easier, until one day they're simply seen as commonplace. For example, how many of you are using, or have used, the image-translation function of Google Translate to display another language, or WeChat's natural conversion of Chinese into English or vice versa?


We are light-years beyond where we were just a few years ago, and every day it gets better; efforts like these just continue to make things better and better.


How was that for using our machines for good and not the darkest of evils? I'm excited—aren't you?

I hope everyone had a wonderful holiday weekend, surrounded by friends and family. Summer is upon us, finally, which means more yard work to get done here.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


5G could mean less time to flee a deadly hurricane, heads of NASA and NOAA warn

This is not a new debate, but as 5G gets closer to being a reality, the debate is getting louder.


London Underground passengers told to turn off their Wi-Fi if they don’t want to be tracked

Love the idea of using data in a smart way. And I like how they are upfront in telling you they are collecting data. Now, why is this not an opt-in? Seems rather odd, in the land of GDPR, that they are not required to get consent first.


US Postal Service will use autonomous big rigs to ship mail in new test

We continue to inch closer to autonomous vehicles becoming a reality, and at the same time making the movie Maximum Overdrive a possible documentary.


Facebook plans to launch 'GlobalCoin' cryptocurrency in 2020

Well, now, what’s the worst that can happen?


The blue light from your LED screen isn’t hurting your eyes

Maybe not, but screen protectors, dark themes, and looking away frequently aren’t bad ideas.


Building a Talent Pipeline: Who’s Giving Big for Data Science on Campus?

I remember 10 years ago when a colleague told our group "…a data scientist isn't a real job and won't exist in five years." Well, it's now a job that's seeing money pour into higher education. There's a dearth of people in the world who can analyze data properly. Here's hoping we can fix that.


Falsehoods programmers believe about Unix time

Time zones are horrible creatures, and often the answer you hear is to use UTC for everything. But even UTC has flaws.
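One of those flaws is easy to demonstrate: Unix time pretends every day is exactly 86,400 seconds long, so a leap second, like the one inserted at the end of 2016, simply doesn't exist in it. A quick Python check:

```python
from datetime import datetime, timezone

# 2016-12-31 contained a leap second (23:59:60 UTC), so that day was
# 86,401 SI seconds long. Unix time ignores this entirely: the gap
# between consecutive UTC midnights is always exactly 86,400.
dec31 = datetime(2016, 12, 31, tzinfo=timezone.utc)
jan1 = datetime(2017, 1, 1, tzinfo=timezone.utc)

assert jan1.timestamp() - dec31.timestamp() == 86400
# The instant 2016-12-31 23:59:60 UTC has no distinct Unix timestamp;
# datetime won't even let you construct it (seconds must be 0-59).
```

So even "just use UTC" code can silently assume 86,400-second days and be subtly wrong around leap seconds.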


Summer has started; we were able to enjoy time outdoors with friends and family:


By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Mav Turner. He explores aspects of maintaining performance and security of the Army’s Command Post Computing Environment.


The modern military landscape requires a network portable enough to be deployed anywhere, but as reliable as a traditional network infrastructure. As such, the Department of Defense (DoD) is engaged in an all-out network modernization initiative designed to allow troops everywhere, from the population-dense cities in Afghanistan to the starkly remote Syrian Desert, to access reliable communications and critical information.


The Army’s Command Post Computing Environment (CP CE), designed to provide warfighters with a common computing infrastructure framework regardless of their location, is a perfect example of mobile military network technology in action. The CP CE integrates a myriad of mission command capabilities into, as the Army calls it, “the most critical computing environment developed to support command posts in combat operations.”


Modern warfighters can’t take their entire network operations with them into theater, but they want to feel like they can. Increasingly, the armed forces are leaving their main networks at home and carrying smaller footprints wherever the action takes them. These troops are expecting the same quality of service that their non-tactical networks deliver.


Beyond Traditional Network Monitoring


The complexity of networks like CP CE makes network monitoring for government agencies more critical, but it also poses significant troubleshooting and visibility challenges. Widely distributed networks can introduce an increased number of elements to be monitored, as well as servers and applications. Administrators must have an unfettered view into everything within these networks.


Monitoring processes must be robust enough to keep an eye on overall network usage. Soldiers in the field attempting to use the network to communicate with their command can find their communications efforts hampered by counterparts using the same network for video streaming capabilities. Administrators need to be able to quickly identify these issues and pinpoint their origination points, so soldiers can commence with their missions unencumbered by any network pain points.


Securing Distributed Mobile Networks


Security monitoring must also be a top priority, but that becomes more onerous as the network becomes more distributed and mobile. Soldiers already use an array of communications tools in combat, and the number of connected devices is growing, thanks to the Army’s investment in the Internet of Battlefield Things (IoBT). Distributed networks operating in hostile environments can be prime targets for enemy forces, which can focus on exploiting network vulnerabilities to interrupt communications, access information, or even bring the network itself down.


Traditional government cybersecurity monitoring tools must also be scalable and flexible enough to cover the unique needs of the battlefield. Security and information event management (SIEM) solutions need to be able to detect suspicious activity across the entire network, however distributed it may be. Administrators should have access to updated threat intelligence from multiple sources across the network and be able to respond to potential security issues from anywhere at any time. Wherever possible, automated responses should be put in play to help mitigate threats and minimize their impact.


Soldiers in combat require immediate access to information, which in turn requires a dependable and secure network. To achieve that objective, administrators must have a system in place that allows them to quickly address problems and bottlenecks as they occur. It can mean the difference between making right or wrong decisions. Or, in the most extreme cases, the difference between life and death.


Find the full article on C4ISRNET.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

It’s late May 2019, which means I have been with SolarWinds for five years.
That’s the second-longest employment in my life.
I usually change jobs because of boreout, which happens when the employee engagement circle turns into a downward spiral.


The good thing is: the chance of boredom here at SolarWinds is quite low, and this is the main reason:

That’s 38 badges, but not each trip “rewarded” a badge. I just checked our portal, and it shows 59 trips overall for me. Not bad, although for obvious reasons I visited Germany most of the time.

Some of these trips are still in my memory for various reasons. And that means a lot, as I tend to forget things the minute I turn around. Like… subnet calculation. I learned it, once. That’s all I remember.


So, let me walk down memory lane.


My very first trip (no badge, sorry) took place in October 2014, and I went to Regensburg, Germany, where one of our distributors, RamgeSoft, held a day event.
I had worked with SolarWinds for just five months and was supposed to speak in front of an audience who knew more about our products than I did at the time… that was fun! I met our MVP HerrDoktor there for the first time.


The next memorable trip happened in May 2015, when we organized an event in Germany for the first time. I went to Munich and Frankfurt with a group of five: two ladies and two gentlemen from Ireland who had never been to Germany, and me. We travelled with a vice president and he rented a Mercedes for us, but I was asked to drive as, according to them, traffic is on the wrong side of the road.
As they hadn’t been to Germany before, they obviously hadn’t seen a German Autobahn.
For the uninitiated: there’s no general speed limit.
I’ll never forget the VP sitting next to me shouting in a surprisingly high-pitched voice, “Sascha, I think that’s fast enough for me,” as I reached twice the speed limit of typical Irish roads.
Well, I had fun, and the guys in the back had fun as well.


Now, here’s a badge:


October 2016 in Stuttgart, Germany.

I remember it as the most boring show I ever attended.
I went there with a business partner, and it was just the two of us at the booth. Attendance overall was extremely poor. Think tumbleweeds. We started some innovative games with the other exhibitors to entertain ourselves, and I felt sorry for the attendees as everyone jumped at them, “PLEASE TAKE THIS PEN. AND THIS USB STICK. AND TAKE ME OUT OF HERE.”

I wanted to find out what became of that show, and I found an article from December 2016 stating they cancelled planning for 2017. “IT & Business” is no more. Rest in peace.

October 2016 in Berlin was my first event outside the private sector, all suited up! I wasn’t sure what to expect besides great food, as the event took place in Berlin’s most elegant hotel, but it turns out there isn’t much difference when talking to an IT pro working private or public—the problems are the same.


November 2016, a road trip to Germany with our distributor Ebertlang.
For a week and a half, we travelled through the country, a different hotel each night, different venue, different people, but the same program each day. I began to understand how it feels for a musician to be on tour.
My takeaway from this trip: waking up in the morning not knowing what city you’re in is weird.


Ah yes, February 2017, the mother of all trade shows!
CLEUR had taken place in Berlin the year before; this time, I was the only German speaker at the SolarWinds booth, and people were queueing to talk to me. Next, next, next. The customer appreciation event was celebrated with an old school techno act on stage, but for whatever reason, our group ended up in an Irish pub, and I have no idea when we left. Patrick, do you remember?

No badge for the next trip, Istanbul in April 2017. My colleague Ramazan and I arrived on a Sunday, and I was shocked the moment we left the airport; tanks and army presence everywhere. It was the evening of a special election in Turkey, and I was a little nervous. Scratch the “little.” Fortunately, besides a few fires here and there, nothing serious happened, and the trip was enjoyable. Istanbul is a beautiful city, and the food is fantastic.


April 2017 in Gelsenkirchen, Germany.
This one was memorable as it is just 15 minutes away from where I was born and raised, and the city hosts the football team I supported while I was interested in football. Our partner Netmon24 organized the event, and the venue was an old coal mine that’s now a UNESCO world heritage site. The tour in the afternoon was cool. For whatever reason, I had never been there even though it was in my old neighborhood. I think we quite often ignore great things around us because we consider them as normal, without ever appreciating them.


September 2017 in Frankfurt, Germany
The venue was the local football stadium (no, I never supported that team), and I remember it because we found a small black box behind our booth an hour or two before the show closed. We asked, "What is that?" and opened it. A HoloLens, if you can believe it.
We went upstairs to ask the Microsoft guys if they had lost something, but they said it wasn’t theirs.
We asked the on-site security manager if someone had asked for a HoloLens, and he just replied, “Holowhat?” So, we finally dropped it at the reception desk of the organizer. Just after we played with it for…a bit!


In October 2017, I was flying to Mons, Belgium. Or rather, I was supposed to, but Storm Ophelia said, “You’re not going to leave the country.” I managed to leave Cork on the last bus to Dublin, and the driver was fighting to keep the bus on the road because of the wind. Can you imagine how strong the winds were to do that to a bus? That was quite a ride!
By the time I arrived in Dublin, the airport had shut down. I stayed in Dublin overnight and managed to catch an overpriced flight the next morning, arriving at the venue an hour before the first day finished.
Now, while writing, I just remembered one more thing: they had a DeLorean at the show grounds. Not just a random one, but one with a flux capacitor between the seats and the signatures of Michael J. Fox and Christopher Lloyd. I loved it.


March 2018, Paris
I lived in Paris from 2005 – 2007, and it took me ten years to return for a couple of days.
The event itself was okay, quite busy, actually, but the usual problem in France is: if you don’t speak French, you’re lost, and I’ve lost most of my French in a decade. Mon Dieu.
What makes this show unforgettable was the location: Disneyland Europe. Disneyland closed a little earlier that day and opened again just for the attendees of the trade show. That was amazing! No queues. I repeat: no queues. I probably saw more attractions in those 2 – 3 hours than a tourist could in a full day. Just great.


No badge (well, I had one, but had to return it): August 2018, my first visit to our headquarters, and my first one to the U.S. in general.
First, the word “hot” should have more than three letters to express the heat in Texas.
Just don’t add an H behind it, as that would be wrong in so many ways.

There is so much space everywhere. The roads are so wide.
A single slot in a car park could fit three cars in Europe. 
I was seriously impressed. With the food…not so much.


October 2018, Dubai

GITEX was probably the most exciting show I’ve attended so far, as there was so much to see. It’s a general technology show without a specific focus, just like the glorious days of CeBIT here in Europe. Unfortunately, the organizers didn’t provide badges that allowed me to join any of the talks, even though they were much more interesting than usual.
The city of Dubai is quite fascinating as well. The heat is like Texas, but most of the sidewalks are air-conditioned. Shiny, modern, and high-tech everywhere…if you stay in the city center. Outside? Not really.

Oh, and before I forget: I went to Salt Bae. It’s entertaining. Look it up.



April 2019 in Munich
Just a couple of weeks ago. Why will I remember that one? As we finished the presentations, the organizer invited everyone to a free-fall indoor skydiving event in a vertical wind tunnel. I had never done that before, and it was fun even though I wasn’t very talented, to say the least. A simple, but great experience.


Obviously, loads of other things happened, but THWACK isn’t the right audience to share them.
Also, I don’t even remember how many flights got delayed, how often I’ve had to stay a night somewhere unplanned, and how often the French air traffic controllers were on strike.


What’s coming next?


In June, I’ll add two more badges to the collection: InfoSec in London and Cisco Live! in San Diego.
I have a feeling San Diego might become a memorable event, too.

In my previous post, we talked about the CALMS framework as an introduction to DevOps, and how it's more than just “DevOps tooling.” Yes, some of it is about automation and what automation tools bring to the table, but what matters is what teams do with automation: quickly creating more great releases with fewer and shorter incidents. Choosing a one-size-fits-all solution won't work with a true DevOps approach. I'll briefly revisit CALMS (you can read more in my previous post) and the four key metrics for measuring a team's performance. From there, we'll look at choosing the right tooling.



Image: the CALMS framework for DevOps. Source: Atlassian, https://www.atlassian.com/devops

Let's quickly reiterate CALMS:

  • Culture
  • Automation
  • Lean
  • Measurements
  • Sharing


These five core values are an integral part of high-performing teams. Successful teams tend to focus on these areas to improve their performance. What makes teams successful in the context of DevOps tooling, you ask? I'll explain.


Key Performance Metrics for a Successful DevOps Approach


Measuring a team's performance can be hard. You want to measure metrics the team can directly influence, while avoiding measures of success that are too vague. For instance, customer NPS involves much more than a single team's efforts, so one team's contribution gets lost in translation. A good methodology for measuring DevOps performance comes from DevOps Research and Assessment (DORA), a company that publishes a yearly report on the state of DevOps, “Accelerate: State of DevOps 2018: Strategies for a New Economy.” They recommend using these four performance metrics:

Source: DORA Accelerate State of DevOps 2018


  • Deployment frequency: how often does the team deploy code?
  • Lead time for changes: how long does it take to go from code commit to code successfully running in production?
  • Time to restore service: how long does it take to restore service after an incident (like an unplanned outage or security incident)?
  • Change failure rate: what percentage of changes results in an outage?

Image: 2018 State of DevOps Report, https://cloudplatformonline.com/2018-state-of-devops.html


These metrics measure output, like changes to software or infrastructure, as well as the quality of that output. They're general enough to be broadly applicable, yet concrete enough to be of value for many IT teams. They also clearly embrace the core values of the CALMS framework. Without good post-mortems (or sprint reviews), how do you bring down the change failure rate or the time to restore service? Without automation, how do you increase deployment frequency?
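Expressed as code, the four metrics boil down to simple arithmetic over your deployment and incident records. Here's a minimal sketch in Python; the record shapes and values are hypothetical stand-ins for whatever your CI/CD and incident tooling actually exports:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: commit time, deploy time, and whether the change failed.
deploys = [
    {"committed": datetime(2019, 5, 1, 9), "deployed": datetime(2019, 5, 1, 15), "failed": False},
    {"committed": datetime(2019, 5, 2, 10), "deployed": datetime(2019, 5, 3, 10), "failed": True},
    {"committed": datetime(2019, 5, 6, 8), "deployed": datetime(2019, 5, 6, 12), "failed": False},
]
# Hypothetical incidents: when service broke and when it was restored.
incidents = [
    {"start": datetime(2019, 5, 3, 10), "restored": datetime(2019, 5, 3, 14)},
]

days_observed = 7

# Deployment frequency: deploys per day over the observation window.
deployment_frequency = len(deploys) / days_observed

# Lead time for changes: average time from commit to running in production.
lead_time = sum((d["deployed"] - d["committed"] for d in deploys), timedelta()) / len(deploys)

# Change failure rate: fraction of deploys that caused a failure.
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

# Time to restore service: average incident duration.
time_to_restore = sum((i["restored"] - i["start"] for i in incidents), timedelta()) / len(incidents)

print(deployment_frequency, lead_time, change_failure_rate, time_to_restore)
```

The point of keeping the computation this simple is that the hard part is collecting honest timestamps, not the math.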


Choosing the right support infrastructure for your automation efforts is key to increasing performance, though, and a one-size-fits-all solution will almost certainly be counter-productive.


Why The Right Tool Is Vital

Each team is unique. Team members each have their own skills. The product or service they work on is built around a specific technology stack. The maturity of the team and the tech is different for everyone. The circumstances in which they operate their product or service and their dependencies are incomparable.


So what part of that mix makes a one-size-fits-all solution fit? You guessed it: none.


Add in the fact that successful teams tend to be nimble and quick to react to changing circumstances, and you'll likely conclude that most “big” enterprise solutions are incompatible with the DevOps approach. Every problem needs a specific solution.


I'm not saying you need to create a complex and unmanageable toolchain, which would be the other extreme. I’m saying there's a tendency for companies to buy into big enterprise solutions because they look good on paper, they’re an easier sell (as opposed to buying dozens of smaller tools), and nobody ever got fired for buying $insert-big-vendor-name-here.


And I'm here to tell you that you need to resist that tendency. Build out your toolchain the way you and your team see fit. Make sure it does exactly what you need it to do, and make sure it doesn't do anything you don't need. Simplify the toolchain.


Use free and open-source components that are easier to swap out, so you can change when needed without accumulating technical debt or being limited by a big solution that won't let you use the software as you want. That's a major upside of “libre” software, which much open-source software is: you’re free to use it the way you intend, not just the way the original creator intended.


Next Up

So there you have it. Build your automation toolchain, infrastructure, software service, or product using the tools you need, and nothing more. Make components easy to swap out when circumstances change. Don't buy into the temptation that any vendor can be your one-size-fits-all solution. Put in the work to create your own chain of tools that work for you.


In the next post in this series, I'll dive into an overview and categorization of a DevOps toolchain, so you'll know what to look out for, what tools solve what problems, and more. We'll use the Periodic Table of DevOps tools, look at value streams to identify which tools you need, and look at how to choose for different technology stacks, ecosystems, and popularity to solve specific problems in the value stream.

Welcome to the first in a five-part series focusing on information security in a hybrid IT world. Because I’ve spent the vast majority of my IT career as a contractor for the U.S. Department of Defense, I view information security through the lens that protecting national security and keeping lives safe is the priority. The effort and manageability challenges of the security measures are secondary concerns.



Photograph of the word "trust" written in the sand with a red x on top.

Modified from image by Lisa Caroselli from Pixabay.


About Zero Trust

In this first post, we’ll explore the Zero Trust model. Odds are you’ve heard the term “Zero Trust” multiple times in the nine years since Forrester Research’s John Kindervag created the model. In more recent years, Google and Gartner followed suit with their own Zero Trust-inspired models: BeyondCorp and LeanTrust, respectively.


“Allow, allow, allow,” Windows Guy must authorize each request. “It’s a security feature of Windows Vista,” he explains to Justin Long, the much cooler Mac Guy. In this TV commercial, Windows Guy trusts nothing, and each request requires authentication (from himself) and authorization.


The Zero Trust model works much like this. By default, nothing is trusted or privileged, and internal requests don’t get preference over external requests. Other methods help enforce the model: least-privilege access, strict access right controls, and intelligent analytics for greater insight and logging. These additional security controls are the Zero Trust model in action.


If you think Zero Trust sounds like “Defense-in-Depth,” you are correct. Defense-in-Depth will be covered in a later blog post. As you know, the best security controls are always layered.


Why Isn’t Trust but Verify Enough?

Traditional perimeter firewalls, the gold standard for “trust but verify,” leave a significant vulnerability in the form of internal, trusted traffic. Perimeter firewalls focus on keeping the network free of untrusted (and unauthorized) external traffic. This type of traffic is usually referred to as “North-South” or “Client-Server.” Another kind of traffic exists, though: “East-West” or “Application-Application” traffic, which probably won’t hit a perimeter firewall because it never leaves the data center.


Most importantly, perimeter firewalls don’t apply to hybrid cloud, a term for the space where private and public networks coalesce, or to public cloud traffic. Additionally, while the cloud simplifies some things, like building scalable, resilient applications, it adds complexity in other areas, like networking, troubleshooting, and securing one of your greatest assets: data. Cloud also introduces new traffic patterns and infrastructure you share with others but don’t control. Hybrid cloud blurs the trusted and untrusted lines even further. Applying the Zero Trust model allows you to begin to mitigate some of the risks from untrusted public traffic.


Who Uses Zero Trust?

In any layered approach to security, most organizations are probably already applying some Zero Trust principles, like multi-factor authentication, least privilege, and strict ACLs, even if they haven’t reached the stage of requiring authentication and authorization for all requests from processes, users, devices, applications, and network traffic.


Also, the CIO Council, “the principal interagency forum to improve [U.S. Government] agency practices for the management of information technology,” has a Zero Trust pilot slated to begin in summer 2019. The National Institute of Standards and Technology, Department of Justice, Defense Information Systems Agency, GSA, OMB, and several other agencies make up this government IT security council.


How Can You Apply Principles From the Zero Trust Model?


  • Whitelists. A list of who to trust. It can specifically apply to processes, users, devices, applications, or network traffic that are granted access. Anything not on the list is denied. The opposite of this is a blacklist, where you need to know the specific threats to deny, and everything else gets through.

  • Least privilege. The principle in which you assign the minimum rights to the minimum number of accounts to accomplish the task. Other parts include separation of user and privileged accounts with the ability to audit actions.

  • Security automation for monitoring and detection. Intrusion prevention systems that stop suspect traffic or processes without manual intervention.

  • Identity management. Harden the authentication process with a one-time password or implement multi-factor authentication (requires proof from at least two of the following categories: something you know, something you have, and something you are).

  • Micro-segmentation. Network security access control that allows you to protect groups of applications and workloads and minimize any damage in case of a breach or compromise. Micro-segmentation also can apply security to East-West traffic.

  • Software-defined perimeter. Micro-segmentation designed for a cloud world, in which assets or endpoints are obscured in a “black cloud” unless you “need to know (or see)” the assets or group of assets.
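The whitelist and least-privilege principles above can be illustrated with a toy authorization check. The service names and actions here are hypothetical, and a real implementation would sit behind your identity provider, but the deny-by-default shape is the essence of Zero Trust:

```python
# Toy whitelist: each principal is explicitly granted a set of actions.
# Anything not on the list is denied -- the Zero Trust default.
WHITELIST = {
    "backup-svc": {"read:database"},
    "deploy-svc": {"write:app-server", "read:artifact-store"},
}

def is_authorized(principal: str, action: str) -> bool:
    """Deny by default; allow only explicit (principal, action) pairs."""
    return action in WHITELIST.get(principal, set())

print(is_authorized("backup-svc", "read:database"))   # allowed: explicitly granted
print(is_authorized("backup-svc", "write:database"))  # denied: action not granted
print(is_authorized("unknown-svc", "read:database"))  # denied: unknown principal
```

Contrast this with a blacklist, where the unknown principal would sail through because nobody thought to deny it.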



Implementing any security measure takes work and effort to keep the bad guys out while letting the good guys in and, most importantly, keeping valuable data safe.


However, security breaches and ransomware attacks increase every year. As more devices come online, perimeters dissolve, and the amount of sensitive data stored online grows more extensive, the pool of malicious actors and would-be hackers increases.


It’s a scary world, one in which you should consider applying “Zero Trust.”

This week's Actuator comes to you direct from the Fire Circle in my backyard because (1) I am home for a few weeks and (2) it finally stopped raining. The month of May has been filled with events for me these past nine years, but not this year. So of course, the skies have been gray for weeks. We are at 130% of normal rainfall year to date, and just one more inch of rain between now and September 30th would set a new record.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


WhatsApp Finds and Fixes Targeted Attack Bug

I’m shocked, just shocked, to find out WhatsApp and Facebook may have intentionally been spying on their users.


Microsoft Reveals First Hardware Using Its New Compression Algorithm

And then they open sourced the technology, making it available for anyone to use, including AWS. More evidence that this is the new Microsoft.


Strong Opinions Loosely Held Might be the Worst Idea in Tech

Toxic Certainty Syndrome is a real problem, made worse by the internet. I’m not sure the proposed solution of offering percentages is the right choice for everyone, but I’m 100% certain we need to do something.


Amazon rolls out machines that pack orders and replace jobs

Amazon gets subsidies with the promise of creating jobs, then deploys robots to remove those same jobs.


San Francisco banned facial recognition tech. Here’s why other cities should too.

I’m with San Francisco on this, mostly due to the inherent bias found in the technology at the current time.


Gmail logs your purchase history, undermining Google’s commitment to privacy

Don’t be evil, unless you can get away with it for decades.


Selfie Deaths Are an Epidemic

Something, something, Darwin.


Thankful to have the opportunity to walk around Seattle after the SWUG two weeks ago:

As the public cloud continues to grow in popularity, it has started to penetrate our private data centers, making hybrid IT a reality. More companies are adopting a hybrid IT model, and I keep hearing that we need to forget everything we know about infrastructure and start over when it comes to the public cloud. That's very difficult for me to imagine. I've spent the last fifteen years understanding, troubleshooting, and managing infrastructure. I've spent a lot of time perfecting my craft, and I don't want to just throw it away. Instead, I’d like to think experienced systems administrators can build on their knowledge and experience to bring value to a hybrid IT model. I want to explore a few areas where on-premises system administrators can use what they know today, build on that knowledge, and apply it to hybrid IT.




Monitoring is a critical component of a solid, functional data center. It informs us when critical services are down. It helps create baselines, so we know what to measure against and how to improve applications and services. Monitoring is so important that entire facilities, called Network Operations Centers (NOCs), are dedicated to this single function. Operations staff who know how to properly configure monitoring systems, and who home in on not just critical services but the entire environment the application requires, provide real value.


As we begin to shift workloads to the public cloud, we need to continue monitoring the entire stack on which our application lives. We'll need to expand our toolset to monitor these workloads in the cloud: trade the ability to monitor an application service for the ability to monitor an API, since all public cloud providers built their services on top of APIs. Start becoming familiar with how to interact with an API. Change the way you think about up-and-down monitors. Monitor whether the instance in the cloud is sized correctly, because you're paying for both the size of the instance and the time it's running. We know what a good monitoring configuration looks like; now we need to expand it to include the public cloud.
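As a sketch of that sizing check, here's what flagging underutilized instances might look like once you've pulled instance metadata and average CPU utilization from your provider's monitoring API. The records and threshold below are hypothetical stand-ins for real API responses:

```python
# Hypothetical instance records, as you might assemble them from a
# cloud provider's monitoring API (names and sizes are made up).
instances = [
    {"id": "i-001", "size": "xlarge", "avg_cpu_percent": 4.2},
    {"id": "i-002", "size": "medium", "avg_cpu_percent": 71.0},
]

def flag_oversized(instances, cpu_threshold=10.0):
    """Flag instances whose average CPU suggests you're paying for idle capacity."""
    return [i["id"] for i in instances if i["avg_cpu_percent"] < cpu_threshold]

print(flag_oversized(instances))  # ['i-001']
```

The monitor isn't asking "is it up?" anymore; it's asking "is it the right size for what I'm paying?"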




One of the biggest things to be aware of when connecting a private data center to a public cloud provider is the additional networking fees. Cloud providers want businesses to move as much data as possible to the public cloud, so as an incentive, they make inbound traffic transfers free. To move your data out, or across different regions, expect additional fees. Cloud providers have regions all across the world, and depending on which region your data is leaving from, migration costs may change. Additional charges may also be incurred by other services, such as deploying an appliance or using a public IP address. These are technical skills to build on, and they change the way we think about networking when we apply them to hybrid IT.
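A back-of-the-envelope cost estimate makes the asymmetry obvious. The per-GB rates below are hypothetical; check your provider's current pricing page for real numbers:

```python
# Hypothetical per-GB transfer rates (illustrative only -- real rates vary
# by provider, region, and monthly volume tier).
RATES_PER_GB = {
    "inbound": 0.00,          # providers typically don't charge to bring data in
    "egress_internet": 0.09,  # moving data out to the internet
    "cross_region": 0.02,     # replicating between provider regions
}

def transfer_cost(gb: float, direction: str) -> float:
    return gb * RATES_PER_GB[direction]

# 500 GB out to the internet plus 2 TB replicated to a second region:
monthly = transfer_cost(500, "egress_internet") + transfer_cost(2000, "cross_region")
print(f"${monthly:.2f}/month")  # $85.00/month
```

Inbound stays free no matter the volume; it's the outbound and cross-region lines that surprise people on the first bill.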




As a virtualization administrator, you're very familiar with managing the hypervisor, templates, and images. These images are the base operating environment in which your applications run. We've spent lots of time tweaking and tuning these images to make our applications run as efficiently as possible. Once our images are in production, we have to solve how to scale for load and how to maintain a solid environment without affecting production. This ranges from rolling out patches to upgrading software.


As we move further into a hybrid IT model and begin to use the cloud providers’ tools, image management becomes a little easier. Most public cloud providers offer managed autoscaling groups, where resources spin up or down automatically based on a metric like CPU utilization, without you having to intervene. Some providers offer multiple upgrade rollout strategies for autoscaling groups, ranging from a simple canary rollout to upgrading the entire group at once. These new tools scale with application demand automatically and simplify the software rollout strategy.
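The scaling decision an autoscaling group makes on your behalf can be sketched in a few lines. The thresholds and step size here are hypothetical, but real providers expose similar knobs (target metric, minimum and maximum capacity):

```python
# A simplified sketch of an autoscaling decision: scale out when average
# CPU is high, scale in when it's low, and respect the group's bounds.
def desired_capacity(current: int, avg_cpu: float,
                     scale_out_at: float = 70.0, scale_in_at: float = 30.0,
                     minimum: int = 2, maximum: int = 10) -> int:
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)   # add an instance, up to the max
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)   # remove one, down to the min
    return current                          # steady state

print(desired_capacity(4, 85.0))  # 5 -> scale out under load
print(desired_capacity(4, 10.0))  # 3 -> scale in when idle
print(desired_capacity(4, 50.0))  # 4 -> no change
```

The value of the managed service is that this loop, plus instance health checks and rollout orchestration, runs for you around the clock.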


Final Thoughts


I don't like the concept of having to throw away years of experience to learn this new model. Yes, the cloud abstracts a lot of the underlying hardware and virtualization, but traditional infrastructure skill sets and experience can still be applied. We will always need to monitor our applications to know how they work and interact with other services in the environment. We also need to understand the differences in the cloud. Don't assume that what we did in the private data center will be a free service in the public cloud. The public cloud is a business, and while some services are free, most are not. Beyond new network equipment or ISP costs, traditional infrastructure never had to account for the cost of moving data around inside the data center. I believe we can take our traditional infrastructure experience, add an understanding of these differences, and build new skills toward the public cloud to run a successful hybrid IT environment.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article on some of the security concerns that come along with containers. I was surprised to hear that a third of our customers are planning to implement containers in the next 12 months. The rate of change in IT never seems to slow down.


Open-source containers, which isolate applications from the host system, appear to be gaining traction with IT professionals in the U.S. defense community. Security remains a notable concern for a couple of reasons.


First, containers are fairly new, and many administrators aren’t completely familiar with them. It’s difficult to secure something you don’t understand. Second, containers are designed in a way that hampers visibility. This lack of visibility can make securing containers taxing.


Layers Upon Layers


Containers are composed of a number of technical abstraction layers necessary for auto-scaling and developing distributed applications. They allow developers to scale application development up or down as necessary. Visibility becomes particularly problematic when using an orchestration tool like Docker Swarm or Kubernetes to manage connections between different containers, because it can be difficult to tell what is happening.


Containers can also house different types of applications, from microservices to service-oriented applications. Some of these may contain vulnerabilities, but that can be impossible to know without proper insight into what is actually going on within the container.


Protecting From the Outside In


Network monitoring solutions are ideal for network security geared toward identifying software vulnerabilities and detecting and mitigating phishing attacks, but they are insufficient for container monitoring. Containers require a form of software development life-cycle monitoring on steroids, and we aren’t quite there yet.


Security needs to start outside the container to prevent bad stuff from getting inside. There are a few ways to do this.


Scan for Vulnerabilities


The most important thing administrators can do to secure their containers is scan for vulnerabilities in their applications. Fortunately, this can be done with network and application monitoring tools. For example, server and application monitoring solutions can be used as security blankets to ensure applications developed within containers are free of defects prior to deployment.


Properly Train Employees


Agencies can also ensure their employees are properly trained and that they have created and implemented appropriate security policies. Developers working with containers need to be acutely aware of their agencies’ security policies. They need to understand those policies and take necessary precautions to adhere to and enforce them.


Containers also require security and accreditation teams to examine security in new ways. Government IT security solutions are commonly viewed from a physical, network, or operating system level; the components of software applications are seldom considered, especially in government off-the-shelf products. Today, agencies should train these teams to be aware of approved or unapproved versions of components inside an application.


Get CIOs on Board


Education and enforcement must start at the top, and leadership must be involved to ensure their organizations’ policies and strategies are aligned. This will prove to be especially critical as containers become more mainstream and adoption continues to rise. It will be necessary to develop and implement new standards and policies for adoption.


Open-source containers come with just as many questions as they do benefits. Those benefits are real, but so are the security concerns. Agencies that can address those concerns today will be able to arm themselves with a development platform that will serve them well, now and in the future.


Find the full article on American Security Today.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Today’s public cloud hyperscalers, such as Microsoft Azure, AWS, and Google, provide a whole host of platforms and services to enable organizations to deliver pretty much any workload you can imagine. However, they aren’t the be-all and end-all of an organization’s IT infrastructure needs.


Not too long ago, the hype in the marketplace was very much geared toward moving all workloads to the public cloud; if you didn’t, you were behind the curve. The reality, though, is that it’s just not practical to move all existing infrastructure to the cloud. Simply taking workloads running on-premises and running them unchanged in the public cloud, known as a “lift and shift,” is considered by many to be the wrong way to do it. That’s not to say it’s wrong for every workload, but things like file servers, domain controllers, and line-of-business application servers tend to cost more to run as native virtual machines in the public cloud and introduce extra complexity with application access and data gravity.


The “Cloud-First” mentality adopted by many organizations is disappearing, gradually being replaced with “Cloud-Appropriate.” I’ve found a lot of the “Cloud-First” messaging has been pushed from the board level without any real consideration or understanding of what it means to the organization, other than the promise of cost savings. Over time, the pioneers who adopted public cloud first have gained the knowledge and wisdom about what operating in a “Cloud-First” environment looks like. The operating costs don’t always work out as expected, and can even be higher.


Let’s look at some examples of what “Cloud-Appropriate” may mean to you. I’m sure you’ve heard of Office 365, which offers an alternative to on-premises workloads such as email servers and SharePoint servers, and adds value with tools like workplace collaboration via Microsoft Teams and task automation with Microsoft Flow. This Software as a Service (SaaS) solution, born in the public cloud, can take full advantage of the infrastructure that underpins it. As an organization, the cost of managing the traditional infrastructure for those services disappears. You’re left with a largely predictable bill and an arguably superior service offering, with little left to do beyond monitoring Office 365.


Application stack refactoring is another great place to think about “Cloud-Appropriate.” You can take advantage of the services available in the public cloud, such as highly performant database solutions like Amazon RDS, or use the public cloud’s elasticity to create more workloads in a short amount of time.


So where does that leave us? A hybrid approach to IT infrastructure. Public cloud is certainly a revolution, but for many organizations, the evolution of their existing IT infrastructure will better serve their needs. Hyperconverged infrastructure is a fitting example of the evolution of a traditional three-tier architecture comprising networking, compute, and storage. The services offered are the same, but the footprint in terms of space, cooling, and power consumption is lower while offering greater performance, which ultimately delivers better value to the business.



Further Reading

CRN and IDC: Why early public cloud adopters are leaving the public cloud amidst security and cost concerns. https://www.crn.com/businesses-moving-from-public-cloud-due-to-security-says-idc-survey

All too often, companies put the wrong people on projects. We see projects where the people making key decisions lack a basic understanding of the technology involved. I’ve been involved with hundreds of projects in which key decisions for the technology portions are made by well-meaning people who don’t understand what they’re trying to approve.


For example, back in 2001 or 2002, a business manager read that XML was the new thing to use to build applications. He decided his department's new knowledge base must be built as a single XML document so it could be searched with the XML language. Everyone in the room sat dumbfounded, and we then spent hours trying to talk him out of his crazy idea.


I’ve worked on numerous projects where the business decided to buy some piece of software, and the day the vendor showed up to do the install was the day we found out about the project. The hardware we were supposed to have racked and configured wasn’t there; the crazy uptime requirements the software was supposed to meet had never been shared; and the software licenses required to run the solution were never discussed with those of us in IT.


If the technology pros had been involved from the early stages of these projects, the inherent problems could have been surfaced much earlier and mitigated before the go-live date. Typically, when dealing with issues like these, everyone on the project is annoyed, and that’s no way to make forward progress. The business is mad at IT because IT didn’t deliver what the vendor needed. IT is mad at the business because they found out too late to ensure smooth installation and service turn-up. The company is mad at the vendor because the vendor missed the deadline. The vendor is mad at the client for not having the servers the business was told they needed.


If the business unit had simply brought the IT team into the project earlier (ideally much earlier), a lot of these problems wouldn’t have happened. Having the right team to solve problems and move projects through the pipeline makes everything easier and leads to successfully completed projects. That’s the entire point: to complete the project successfully. The bonus is that people aren’t angry at each other for entirely preventable reasons.


Having the right people on projects from the beginning can make all the difference in the world. If people aren’t needed on a project, let them bow out, but let them decide that they don’t need to be involved; don’t decide for them. Choosing for them can introduce risk to the project and end up creating more work for people. After the initial project meetings, send a general notice to the people on the project letting them know they can opt out of the rest of the project if they aren’t needed, but if they’re necessary, let them stay.


I know in my career I’ve sat in a lot of meetings for projects, and I’d happily sit in hundreds more to avoid finding out about projects at the last minute. We can make the projects that we work on successful, but only if we work together.


Top 3 Considerations to Make Your Project a Success

  • Get the right people on the project as early as possible
  • Let people and departments decide if they are needed on a project; don't decide for them
  • Don't shame people who leave a project because they aren't needed on the project team
