Hey THWACKers! Welcome back for week 2 in machine learning (ML). In my last post, Does Data Have Ethics? Data Ethic Issues and Machine Learning, you may have noticed I mentioned "evil" four times, but also mentioned "good" four times. Well, you're in luck. After all that talk about evil and ethics, I want to share with you some good that's been happening in the world.

 

But who can talk about goodness, without mentioning the dark circumstances “the machines” don't want you to know about?

 

For those who aren't familiar, the Library of Alexandria was a place of wonder, a holder of so much knowledge, documentation, and so much more. But what happened? It was DESTROYED.

 

In preparation for this topic, and because I wanted to mention some very specific library destructions over the years, I found this great source on Wikipedia so you can see just how much of our history has been lost.

 

Some notable events were:

  • ALL the artifacts, libraries, and more destroyed by ISIS
  • The 200+ years’ worth of artifacts, documents, and antiquities destroyed in the National Museum of Brazil fire
  • The very recent fire at Notre Dame, where the fires are hardly even out while this topic smolders within me
  • The Comet Disaster that breaks off and destroys this sleepy Japanese town every 1,200 years (OK, so this one’s from an anime movie, but natural disasters are disasters all the same.)

 

 

Image: Screen capture from the movie “Your Name” (Original title: Kimi no na wa) 50:16

https://www.imdb.com/title/tt5311514/

 

But how can machine learning help with this? Because I'm sure you all think “the machines” will cause the next level of catastrophe and destruction, right?

 

I’d like to introduce you to someone I'm honored to know, whose work has inspired growth and change, and can not only help preserve the past but also enlighten the future.

This inspiration is Tkasasagi, who has been setting the ML world on fire with natural language processing and evolutionary changes to the translation of Ancient, Edo era, and cursive Hiragana.

 

To give you a sense of the significance of this, there's a quote from last June, "If all Japanese literature and history researchers in the whole country help transcribing all pre-modern books in Japan, it will only take us 2000 years per persons to finish it."

 

Let's put that into perspective—there are countless tomes of knowledge, learning, information, and education that document the history and growth of Japanese culture and the nation itself. Japan is an island nation in a region with some of the most active volcanoes and frequent earthquakes in the world. It's only a matter of time before more of this information suffers from natural disasters and is lost to the winds of time. But what can be done about this? How can it be preserved? That's exactly the exciting piece I'm so happy to share with you.

 

Here in the first epoch of this transcription project, machine learning does an OK job… but is it a complete job? Not in the least. But fast forward a few weeks, and the results are staggering and impressive (even if nowhere near complete).

 

Images: https://twitter.com/tkasasagi/status/1036094001101692928

Now some of you may feel (justifiably so) that this is impressive growth in such a short amount of time, and I would agree. Not to mention the model is working with >99% accuracy at this point, which is impressive in its own right.

Image: https://twitter.com/tkasasagi/status/1115862769612599296

 

But the story doesn't end there—it continues literally day by day. (Feel free to follow Tkasasagi and learn about these adventures in real time.)

 

Every day, every little advancement in technologies like this, through natural language processing (NLP), computer vision (CV), and convolutional neural networks (CNNs), grows the entire industry as a whole. You and I, as consumers of this technology, will eventually find our everyday activities easier, and one day these capabilities will simply seem commonplace. For example, how many of you are using, or have used, the image translation function of Google Translate to read another language, or WeChat's natural conversion of Chinese into English or vice versa?

 

We are light-years beyond where we were just a few years ago, and every day it gets better; efforts like these just continue to make things better, and better, and better.

 

How was that for using our machines for good and not the darkest of evils? I'm excited—aren't you?

I hope everyone had a wonderful holiday weekend, surrounded by friends and family. Summer is upon us, finally, which means more yard work to get done here.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

5G could mean less time to flee a deadly hurricane, heads of NASA and NOAA warn

This is not a new debate, but as 5G gets closer to being a reality, the debate is getting louder.

 

London Underground passengers told to turn off their Wi-Fi if they don’t want to be tracked

Love the idea of using data in a smart way. And I like how they are upfront in telling you they are collecting data. Now, why is this not an opt-in? Seems rather odd, in the land of GDPR, that they are not required to get consent first.

 

US Postal Service will use autonomous big rigs to ship mail in new test

We continue to inch closer to autonomous vehicles becoming a reality, and at the same time making the movie Maximum Overdrive a possible documentary.

 

Facebook plans to launch 'GlobalCoin' cryptocurrency in 2020

Well, now, what’s the worst that can happen?

 

The blue light from your LED screen isn’t hurting your eyes

Maybe not, but screen protectors, dark themes, and looking away frequently aren’t bad ideas.

 

Building a Talent Pipeline: Who’s Giving Big for Data Science on Campus?

I remember 10 years ago when a colleague told our group “…a data scientist isn’t a real job and won’t exist in five years.” Well, it’s now a job that’s seeing money pour into higher education. There is a dearth of people in the world who can analyze data properly. Here’s hoping we can fix that.

 

Falsehoods programmers believe about Unix time

Time zones are horrible creatures, and often the answer you hear is to use UTC for everything. But even UTC has flaws.

 

Summer has started; we were able to enjoy time outdoors with friends and family:

 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Mav Turner. He explores aspects of maintaining performance and security of the Army’s Command Post Computing Environment.

 

The modern military landscape requires a network portable enough to be deployed anywhere, but as reliable as a traditional network infrastructure. As such, the Department of Defense (DoD) is engaged in an all-out network modernization initiative designed to allow troops everywhere, from the population-dense cities in Afghanistan to the starkly remote Syrian Desert, to access reliable communications and critical information.

 

The Army’s Command Post Computing Environment (CP CE), designed to provide warfighters with a common computing infrastructure framework regardless of their location, is a perfect example of mobile military network technology in action. The CP CE integrates a myriad of mission command capabilities into, as the Army calls it, “the most critical computing environment developed to support command posts in combat operations.”

 

Modern warfighters can’t take their entire network operations with them into theater, but they want to feel like they can. Increasingly, the armed forces are leaving their main networks at home and carrying smaller footprints wherever the action takes them. These troops are expecting the same quality of service that their non-tactical networks deliver.

 

Beyond Traditional Network Monitoring

 

The complexity of networks like CP CE makes network monitoring for government agencies more critical, but it also poses significant troubleshooting and visibility challenges. Widely distributed networks can introduce many more elements to monitor, along with additional servers and applications. Administrators must have an unfettered view into everything within these networks.

 

Monitoring processes must be robust enough to keep an eye on overall network usage. Soldiers in the field attempting to use the network to communicate with their command can find their communications efforts hampered by counterparts using the same network for video streaming capabilities. Administrators need to be able to quickly identify these issues and pinpoint their origination points, so soldiers can commence with their missions unencumbered by any network pain points.

 

Securing Distributed Mobile Networks

 

Security monitoring must also be a top priority, but that becomes more onerous as the network becomes more distributed and mobile. Soldiers already use an array of communications tools in combat, and the number of connected devices is growing, thanks to the Army’s investment in the Internet of Battlefield Things (IoBT). Distributed networks operating in hostile environments can be prime targets for enemy forces, which can focus on exploiting network vulnerabilities to interrupt communications, access information, or even bring the network itself down.

 

Traditional government cybersecurity monitoring tools must also be scalable and flexible enough to cover the unique needs of the battlefield. Security and information event management (SIEM) solutions need to be able to detect suspicious activity across the entire network, however distributed it may be. Administrators should have access to updated threat intelligence from multiple sources across the network and be able to respond to potential security issues from anywhere at any time. Wherever possible, automated responses should be put in play to help mitigate threats and minimize their impact.

 

Soldiers in combat require immediate access to information, which in turn requires a dependable and secure network. To achieve that objective, administrators must have a system in place that allows them to quickly address problems and bottlenecks as they occur. It can mean the difference between making right or wrong decisions. Or, in the most extreme cases, the difference between life and death.

 

Find the full article on C4ISRNET.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

It’s late May 2019, which means I have been with SolarWinds for five years.
That’s the second-longest employment in my life.
I usually change jobs because of boreout, which happens when the employee engagement cycle turns into a downward spiral.

 

The good thing is: the chance of boredom here at SolarWinds is quite low, and this is the main reason:

That’s 38 badges, but not every trip “rewarded” a badge. I just checked our portal, and it shows 59 trips overall for me. Not bad, although for obvious reasons I visited Germany most of the time.

Some of these trips are still in my memory for various reasons. And that means a lot, as I tend to forget things the minute I turn around. Like… subnet calculation. I learned it, once. That’s all I remember.

 

So, let me walk down memory lane.

 

My very first trip (no badge, sorry) took place in October 2014, and I went to Regensburg, Germany, where one of our distributors, RamgeSoft, held a day event.
I had worked with SolarWinds for just five months and was supposed to speak in front of an audience who knew more about our products than I did at the time… that was fun! I met our MVP HerrDoktor there for the first time.

 

The next memorable trip happened in May 2015, when we organized an event in Germany for the first time. I went to Munich and Frankfurt with a group of five: two ladies and two gentlemen from Ireland who had never been to Germany, and me. We travelled with a vice president and he rented a Mercedes for us, but I was asked to drive as, according to them, traffic is on the wrong side of the road.
As they hadn’t been to Germany before, they obviously hadn’t seen a German Autobahn.
For the uninitiated: there’s no general speed limit.
I’ll never forget the VP sitting next to me shouting in a surprisingly high-pitched voice, “Sascha, I think that’s fast enough for me,” as I reached twice the speed limit of typical Irish roads.
Well, I had fun, and the guys in the back had fun as well.

 

Now, here’s a badge:

 

October 2016 in Stuttgart, Germany.

I remember it as the most boring show I ever attended.
I went there with a business partner, and it was just the two of us at the booth. Attendance overall was extremely poor. Think tumbleweeds. We started some innovative games with the other exhibitors to entertain ourselves, and I felt sorry for the attendees as everyone jumped at them, “PLEASE TAKE THIS PEN. AND THIS USB STICK. AND TAKE ME OUT OF HERE.”

I wanted to find out what became of that show, and I found an article from December 2016 stating they cancelled planning for 2017. “IT & Business” is no more. Rest in peace.

October 2016 in Berlin was my first event outside the private sector, all suited up! I wasn’t sure what to expect besides great food, as the event took place in Berlin’s most elegant hotel, but it turns out there isn’t much difference when talking to an IT pro working in the private or public sector—the problems are the same.

 


November 2016, a road trip to Germany with our distributor Ebertlang.
For a week and a half, we travelled through the country, a different hotel each night, different venue, different people, but the same program each day. I began to understand how it feels for a musician to be on tour.
My takeaway from this trip: waking up in the morning not knowing what city you’re in is weird.

 

  
Ah yes, February 2017, the mother of all trade shows!
While CLEUR took place in Berlin the year before, this time, I was the only German speaker in the SolarWinds booth, and people were queueing to talk to me. Next, next, next. The customer appreciation event was celebrated with an old school techno act on stage, but for whatever reason, our group ended up in an Irish pub, and I have no idea when we left. Patrick, do you remember?


No badge for the next trip, Istanbul in April 2017. My colleague Ramazan and I arrived on a Sunday, and I was shocked the moment we left the airport; tanks and army presence everywhere. It was the evening of a special election in Turkey, and I was a little nervous. Scratch the “little.” Fortunately, besides a few fires here and there, nothing serious happened, and the trip was enjoyable. Istanbul is a beautiful city, and the food is fantastic.

 


April 2017 in Gelsenkirchen, Germany.
This one was memorable as it is just 15 minutes away from where I was born and raised, and the city hosts the football team I supported while I was interested in football. Our partner Netmon24 organized the event, and the venue was an old coal mine that’s now a UNESCO world heritage site. The tour in the afternoon was cool. For whatever reason, I had never been there even though it was in my old neighborhood. I think we quite often ignore great things around us because we consider them as normal, without ever appreciating them.

 


September 2017 in Frankfurt, Germany
The venue was the local football stadium (no, I never supported that team), and I remember it because we found a small black box behind our booth an hour or two before the show closed. We asked, “What is that?” and opened it. A HoloLens, if you could believe it.
We went upstairs to ask the Microsoft guys if they had lost something, but they said it wasn’t theirs.
We asked the on-site security manager if someone had asked for a HoloLens, and he just replied, “Holowhat?” So, we finally dropped it at the reception desk of the organizer. Just after we played with it for…a bit!

 

In October 2017 I was flying to Mons, Belgium. Or rather, I was supposed to, but Ophelia said, “You’re not going to leave the country.” I managed to leave Cork on the last bus to Dublin, and the driver was fighting to keep the bus on the road because of the wind. Can you imagine how strong the winds must have been to do that to a bus? That was quite a ride!
By the time I arrived in Dublin, the airport had shut down. I stayed in Dublin overnight and managed to catch an overpriced flight the next morning, arriving at the venue an hour before the first day finished.
Now, while writing, I just remembered one more thing: they had a DeLorean at the show grounds. Not just a random one, but one with a flux capacitor between the seats and the signatures of Michael J. Fox and Christopher Lloyd. I loved it.

 


March 2018, Paris
I lived in Paris from 2005 – 2007, and it took me ten years to return for a couple of days.
The event itself was okay, quite busy, actually, but the usual problem in France is: if you don’t speak French, you’re lost, and I’ve lost most of my French in a decade. Mon Dieu.
What makes this show unforgettable was the location: Disneyland Europe. Disneyland closed a little earlier that day and opened again just for the attendees of the trade show. That was amazing! No queues. I repeat: no queues. I probably saw more attractions in those 2 – 3 hours than a tourist could in a full day. Just great.

 

No badge (well, I had one, but had to return it): August 2018, my first visit to our headquarters, and my first one to the U.S. in general.
First, the word “hot” should have more than three letters to express the heat in Texas.
Just don’t add an H behind it, as that would be wrong in so many ways.

There is so much space everywhere. The roads are so wide.
A single slot in a car park could fit three European cars.
I was seriously impressed. With the food…not so much.

 

October 2018, Dubai

GITEX was probably the most exciting show I’ve attended so far, as there was so much to see. It’s a general technology show without a specific focus, just like the glorious days of CeBIT here in Europe. Unfortunately, the organizers didn’t provide badges that allowed me to join any of the talks, even though they looked much more interesting than usual.
The city of Dubai is quite fascinating as well. The heat is like Texas, but most of the sidewalks are air-conditioned. Shiny, modern, and high-tech everywhere…if you stay in the city center. Outside, not so much.

Oh, and before I forget: I went to Salt Bae. It’s entertaining. Look it up.

 

 


April 2019 in Munich
Just a couple of weeks ago. Why will I remember that one? As we finished the presentations, the organizer invited everyone to a free-fall indoor skydiving event in a vertical wind tunnel. I had never done that before, and it was fun even though I wasn’t very talented, to say the least. A simple, but great experience.

 

Obviously, loads of other things happened, but THWACK isn’t the right audience to share them.
Also, I don’t even remember how many flights got delayed, how often I’ve had to stay a night somewhere unplanned, and how often the French air traffic controllers were on strike.

 

What’s coming next?

 

In June, I’ll add two more badges to the collection: InfoSec in London and Cisco Live! in San Diego.
I have a feeling San Diego might become a memorable event, too.

In my previous post, we talked about the CALMS framework as an introduction to DevOps, and how it's more than just “DevOps tooling.” Yes, some of it is about automation and what automation tools bring to the table, but it's really about what teams do with that automation to quickly create more great releases with fewer and shorter incidents. Choosing a one-size-fits-all solution won't work with a true DevOps approach. For this, I'll briefly revisit CALMS (you can read more in my previous post) and the four key metrics for measuring a team's performance. From there, we'll look at choosing the right tooling.

 

CALMS

source: atlassian.com, the CALMS framework for DevOps

Image: Atlassian, https://www.atlassian.com/devops

Let's quickly reiterate CALMS:

  • Culture
  • Automation
  • Lean
  • Measurements
  • Sharing

 

These five core values are an integral part of high-performing teams. Successful teams tend to focus on these areas to improve their performance. What makes teams successful in the context of DevOps tooling, you ask? I'll explain.

 

Key Performance Metrics for a Successful DevOps Approach

 

Measuring a team's performance can be hard. You want to measure the metrics they can directly influence, but avoid being overly vague in measuring success. For instance, customer NPS involves much more than a single team's efforts, so that team's contribution can get lost in translation. A good methodology for measuring DevOps performance comes from DevOps Research and Assessment, a company that publishes a yearly report on the state of DevOps, “Accelerate: State of DevOps 2018: Strategies for a New Economy.” They recommend using these four performance metrics:

Source: DORA Accelerate State of DevOps 2018

 

  • Deployment frequency: how often does the team deploy code?
  • Lead time for changes: how long does it take to go from code commit to code successfully running in production?
  • Time to restore service: how long does it take to restore service after an incident (like an unplanned outage or security incident)?
  • Change failure rate: what percentage of changes results in an outage?

Image: 2018 State of DevOps Report, https://cloudplatformonline.com/2018-state-of-devops.html

 

These metrics measure the output, like changes to software or infrastructure, as well as the quality of that output. They're general enough to be broadly applicable, but concrete enough to be of value for many IT teams. These metrics also clearly embrace the core values from the CALMS framework. Without good post-mortems (or sprint reviews), how do you bring down the change failure rate or time to restore service? Without automation, how do you increase deployment frequency?
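
To make these metrics concrete, here's a minimal sketch (in Python, with entirely made-up deployment and incident records) of how a team might compute the four numbers from its own release data; the DORA report doesn't prescribe any particular tooling for this.

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records: (commit time, deploy time, caused an outage?)
deployments = [
    (datetime(2019, 5, 1, 9, 0), datetime(2019, 5, 1, 15, 30), False),
    (datetime(2019, 5, 2, 10, 0), datetime(2019, 5, 3, 11, 0), True),
    (datetime(2019, 5, 6, 8, 0), datetime(2019, 5, 6, 12, 0), False),
]
# Hypothetical incidents: (start, resolved)
incidents = [
    (datetime(2019, 5, 3, 11, 30), datetime(2019, 5, 3, 13, 0)),
]

period_days = 7

# Deployment frequency: deploys per day over the period
deployment_frequency = len(deployments) / period_days

# Lead time for changes: average hours from commit to running in production
lead_time = mean((deploy - commit).total_seconds() / 3600
                 for commit, deploy, _ in deployments)

# Time to restore service: average incident duration in hours
time_to_restore = mean((end - start).total_seconds() / 3600
                       for start, end in incidents)

# Change failure rate: share of deployments that caused an outage
change_failure_rate = sum(1 for *_, failed in deployments if failed) / len(deployments)

print(f"Deploys/day: {deployment_frequency:.2f}")
print(f"Lead time (h): {lead_time:.1f}")
print(f"Time to restore (h): {time_to_restore:.1f}")
print(f"Change failure rate: {change_failure_rate:.0%}")
```

Run against real deployment records, numbers like these give a team a baseline it can actually improve against.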

 

Choosing the right support infrastructure for your automation efforts is key to increasing performance, though, and a one-size-fits-all solution will almost certainly be counter-productive.

 

Why The Right Tool Is Vital

Each team is unique. Team members each have their own skills. The product or service they work on is built around a specific technology stack. The maturity of the team and the tech is different for everyone. The circumstances in which they operate their product or service and their dependencies are incomparable.

 

So what part of that mix makes a one-size-fits-all solution fit? You guessed it: none.

 

Add in the fact that successful teams tend to be nimble and quick to react to changing circumstances, and you'll likely conclude that most “big” enterprise solutions are incompatible with the DevOps approach. Every problem needs a specific solution.

 

I'm not saying you need to create a complex and unmanageable toolchain, which would be the other extreme. I’m saying there's a tendency for companies to buy in to big enterprise solutions because it looks good on paper, it’s an easier sell (as opposed to buying dozens of smaller tools), and nobody ever got fired for buying $insert-big-vendor-name-here.

 

And I'm here to tell you that you need to resist that tendency. Build out your toolchain the way you and your team see fit. Make sure it does exactly what you need it to do, and make sure it doesn't do anything you don't need. Simplify the toolchain.

 

Use free and open-source components that are easier to swap out, so you can change when needed without creating technical debt or being limited by a big solution that won't let you use the software as you want. (That's a major upside of “libre” software, which much open-source software is: you're free to use it the way you intend, not just the way the original creator intended.)

 

Next Up

So there you have it. Build your automation toolchain, infrastructure, software service, or product using the tools you need, and nothing more. Make components easy to swap out when circumstances change. Don't buy into the temptation that any vendor can be your one-size-fits-all solution. Put in the work to create your own chain of tools that work for you.

 

In the next post in this series, I'll dive into an overview and categorization of a DevOps toolchain, so you'll know what to look out for, what tools solve what problems, and more. We'll use the Periodic Table of DevOps tools, look at value streams to identify which tools you need, and look at how to choose for different technology stacks, ecosystems, and popularity to solve specific problems in the value stream.

Welcome to the first in a five-part series focusing on information security in a hybrid IT world. Because I’ve spent the vast majority of my IT career as a contractor for the U.S. Department of Defense, I view information security through the lens that protecting national security and keeping lives safe is the priority. The effort and manageability challenges of the security measures are secondary concerns.

 

 

Photograph of the word "trust" written in the sand with a red x on top.

Modified from image by Lisa Caroselli from Pixabay.

 

About Zero Trust

In this first post, we’ll explore the Zero Trust model. Odds are you’ve heard the term “Zero Trust” multiple times in the nine years since Forrester Research’s John Kindervag created the model. In more recent years, Google and Gartner followed suit with their own Zero Trust-inspired models: BeyondCorp and LeanTrust, respectively.

 

“Allow, allow, allow,” Windows Guy must authorize each request. “It’s a security feature of Windows Vista,” he explains to Justin Long, the much cooler Mac Guy. In this TV commercial, Windows Guy trusts nothing, and each request requires authentication (from himself) and authorization.

 

The Zero Trust model works a bit like this. By default, nothing is trusted or privileged. Internal requests don’t get preference over external requests. Additionally, other methods help enforce the model: least-privilege authentication, strict access rights controls, intelligent analytics for greater insight and logging, and additional security controls. That’s the Zero Trust model in action.
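
To illustrate the default-deny idea (this is a conceptual sketch, not any vendor's implementation), here's a minimal request check in which nothing is trusted by default: every request must present verified authentication, an acceptable device posture, and an explicit policy match, whether it originates inside or outside the network. All names and policies below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool   # e.g., patched, encrypted, managed device
    mfa_verified: bool
    resource: str
    action: str

# Explicit policy: only what is listed is ever allowed (default deny)
POLICY = {
    ("alice", "payroll-db", "read"),
    ("bob", "build-server", "deploy"),
}

def authorize(req: Request) -> bool:
    """Zero Trust style check: authenticate and authorize every request,
    internal or external, and deny anything not explicitly permitted."""
    if not req.mfa_verified:          # authentication is never assumed
        return False
    if not req.device_compliant:      # device posture is part of the decision
        return False
    return (req.user, req.resource, req.action) in POLICY

print(authorize(Request("alice", True, True, "payroll-db", "read")))   # True
print(authorize(Request("alice", True, True, "payroll-db", "write")))  # False: not in policy
```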

 

If you think Zero Trust sounds like “Defense-in-Depth,” you are correct. Defense-in-Depth will be covered in a later blog post. As you know, the best security controls are always layered.

 

Why Isn’t Trust but Verify Enough?

Traditional perimeter firewalls, the gold standard for “trust but verify,” leave a significant vulnerability in the form of internal, trusted traffic. Perimeter firewalls focus on keeping the network free of that untrusted (and not authorized) external traffic. This type of traffic is usually referred to as “North-South” or “Client-Server.” Another kind of traffic exists, though: “East-West” or “Application-Application” traffic that probably won’t hit a perimeter firewall because it doesn’t leave the data center.

 

Most importantly, perimeter firewalls don’t apply to hybrid cloud, a term for that space where private and public networks coalesce, or to public cloud traffic. Additionally, while the cloud simplifies some things, like building scalable, resilient applications, it adds complexity in other areas, like networking, troubleshooting, and securing one of your greatest assets: data. Cloud also introduces new traffic patterns and infrastructure you share with others but don’t control. Hybrid cloud blurs the trusted and untrusted lines even further. Applying the Zero Trust model lets you begin to mitigate some of the risks from untrusted public traffic.

 

Who Uses Zero Trust?

In any layered approach to security, most organizations are probably already applying some Zero Trust principles, like multi-factor authentication, least privilege, and strict ACLs, even if they haven’t reached the stage of requiring authentication and authorization for all requests from processes, users, devices, applications, and network traffic.

 

Also, the CIO Council, “the principal interagency forum to improve [U.S. Government] agency practices for the management of information technology,” has a Zero Trust pilot slated to begin in summer 2019. The National Institute of Standards and Technology, Department of Justice, Defense Information Systems Agency, GSA, OMB, and several other agencies make up this government IT security council.

 

How Can You Apply Principles From the Zero Trust Model?

 

  • Whitelists. A list of who to trust. It can specifically apply to processes, users, devices, applications, or network traffic that are granted access. Anything not on the list is denied. The opposite of this is a blacklist, where you need to know the specific threats to deny, and everything else gets through.

  • Least privilege. The principle in which you assign the minimum rights to the minimum number of accounts to accomplish the task. Other parts include separation of user and privileged accounts with the ability to audit actions.

  • Security automation for monitoring and detection. Intrusion prevention systems that stop suspect traffic or processes without manual intervention.

  • Identity management. Harden the authentication process with a one-time password or implement multi-factor authentication, which requires proof from at least two of the following categories: something you know, something you have, and something you are (see the one-time password sketch after this list).

  • Micro-segmentation. Network security access control that allows you to protect groups of applications and workloads and minimize any damage in case of a breach or compromise. Micro-segmentation also can apply security to East-West traffic.

  • Software-defined perimeter. Micro-segmentation, designed for a cloud world, in which assets or endpoints are obscured in a “black cloud” unless you “need to know (or see)” the assets or group of assets.
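
As a concrete illustration of the one-time password idea from the identity management item above, here's a minimal RFC 6238 (TOTP) sketch using only the Python standard library. This is the generic algorithm behind most authenticator apps rather than any specific product's implementation, and the shared secret shown is purely illustrative.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # 30-second time step
    msg = struct.pack(">Q", counter)                    # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Purely illustrative shared secret; a real one is provisioned per user or device
print(totp("JBSWY3DPEHPK3PXP"))
```

The server and the authenticator app both run this calculation; because the code changes every 30 seconds, a stolen password alone isn't enough to authenticate.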

 

Conclusion

Implementing any security measure takes work and effort to keep the bad guys out while letting the good guys in and, most importantly, keeping valuable data safe.

 

However, security breaches and ransomware attacks increase every year. As more devices come online, perimeters dissolve, and the amount of sensitive data stored online grows more extensive, the pool of malicious actors and would-be hackers increases.

 

It’s a scary world, one in which you should consider applying “Zero Trust.”

This week's Actuator comes to you direct from the Fire Circle in my backyard because (1) I am home for a few weeks and (2) it finally stopped raining. The month of May has been filled with events for me these past nine years, but not this year. So, of course, the skies have been gray for weeks. We are at 130% of normal rainfall year to date, and just one more inch of rain between now and September 30th would set a new record.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

WhatsApp Finds and Fixes Targeted Attack Bug

I’m shocked, just shocked, to find out WhatsApp and Facebook may have intentionally been spying on their users.

 

Microsoft Reveals First Hardware Using Its New Compression Algorithm

And then they open sourced the technology, making it available for anyone to use, including AWS. More evidence that this is the new Microsoft.

 

Strong Opinions Loosely Held Might be the Worst Idea in Tech

Toxic Certainty Syndrome is a real problem, made worse by the internet. I’m not sure the proposed solution of offering percentages is the right choice for everyone, but I’m 100% certain we need to do something.

 

Amazon rolls out machines that pack orders and replace jobs

Amazon gets subsidies with the promise of creating jobs, then deploys robots to remove those same jobs.

 

San Francisco banned facial recognition tech. Here’s why other cities should too.

I’m with San Francisco on this, mostly due to the inherent bias found in the technology at the current time.

 

Gmail logs your purchase history, undermining Google’s commitment to privacy

Don’t be evil, unless you can get away with it for decades.

 

Selfie Deaths Are an Epidemic

Something, something, Darwin.

 

Thankful to have the opportunity to walk around Seattle after the SWUG two weeks ago:

As the public cloud continues to grow in popularity, it has started to penetrate our private data centers and make hybrid IT a reality. More companies are adopting a hybrid IT model, and I keep hearing that we need to forget everything we know about infrastructure and start over when it comes to the public cloud. It's very difficult for me to imagine how to do this. I've spent the last fifteen years understanding infrastructure, troubleshooting infrastructure, and managing infrastructure. I've spent a lot of time perfecting my craft. I don't want to just throw it away. Instead, I’d like to think experienced systems administrators can bring their knowledge and build on their experience to bring value to a hybrid IT model. I want to explore a few areas where on-premises system administrators can use what they know today, build on that knowledge, and apply it to hybrid IT.

 

Monitoring

 

Monitoring is a critical component of a solid, functional data center. It's the function that informs us when critical services are down. It helps create baselines, so we know what to measure against and how to improve applications and services. Monitoring is so important that there are entire facilities, called Network Operations Centers (NOCs), dedicated to this single function. Operations staff who know how to properly configure monitoring systems and home in on not just critical services, but the entire environment the application requires, provide real value.

 

As we begin to shift workloads to the public cloud, we need to continue monitoring the entire stack our application lives on. We'll need to start expanding our toolset to monitor these workloads in the cloud and trade the ability to monitor an application service for the ability to monitor an API. All public cloud providers built their services on top of APIs, so start becoming familiar with how to interact with an API. Change the way you think about up-and-down monitors. Monitor whether the instance in the cloud is sized correctly, because you're paying for both the size and the time that instance is running. We know what a good monitoring configuration looks like. Now we need to expand it to include the public cloud.
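
As a sketch of what "monitoring an API instead of a service" can look like, here's a small example using the AWS SDK for Python (boto3) to pull an instance's type and its recent CPU utilization from CloudWatch, which you could then weigh against what you're paying for. The instance ID, region, and the 20% threshold are placeholders, and the same idea applies to other providers' APIs.

```python
from datetime import datetime, timedelta

import boto3  # AWS SDK for Python

REGION = "us-east-1"                      # placeholder region
INSTANCE_ID = "i-0123456789abcdef0"       # placeholder instance ID

ec2 = boto3.client("ec2", region_name=REGION)
cloudwatch = boto3.client("cloudwatch", region_name=REGION)

# What size are we paying for?
reservation = ec2.describe_instances(InstanceIds=[INSTANCE_ID])
instance = reservation["Reservations"][0]["Instances"][0]
print("Instance type:", instance["InstanceType"])

# How hard has it actually been working over the last day?
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Average"],
)
averages = [point["Average"] for point in stats["Datapoints"]]
if averages and max(averages) < 20:       # arbitrary example threshold
    print("Average CPU never exceeded 20% - this instance may be oversized.")
```

In practice you'd run a check like this on a schedule and feed the results into your existing monitoring and alerting.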

 

Networking

 

One of the biggest things to be aware of when connecting a private data center to a public cloud provider is that there are additional networking fees. The cloud providers want businesses to move as much of their data as possible to the public cloud. As an incentive, they provide free inbound traffic transfers. To move your data out or across different regions, be aware that there are additional fees. Cloud providers have regions all across the world and, depending on which region your data is leaving from, the migration costs may change. Additional charges may also be incurred from other services, such as deploying an appliance or using a public IP address. These are technical skills to build upon, and they change the way we think about networking when we apply them to hybrid IT.
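
As a back-of-the-envelope illustration of why egress matters, here's a tiny cost sketch. The per-GB prices are made-up placeholders rather than any provider's actual rate card, so substitute the numbers from your provider's pricing page.

```python
# Hypothetical transfer prices in USD per GB -- check your provider's rate card
PRICE_PER_GB = {
    "inbound": 0.00,          # ingress is typically free
    "internet_egress": 0.09,  # placeholder outbound-to-internet rate
    "cross_region": 0.02,     # placeholder region-to-region rate
}

def monthly_transfer_cost(gb_by_type: dict) -> float:
    """Estimate a monthly data transfer bill from GB moved per traffic type."""
    return sum(PRICE_PER_GB[kind] * gb for kind, gb in gb_by_type.items())

# Example: 500 GB in, 2 TB out to the internet, 300 GB replicated across regions
estimate = monthly_transfer_cost({
    "inbound": 500,
    "internet_egress": 2048,
    "cross_region": 300,
})
print(f"Estimated transfer cost: ${estimate:,.2f}")
```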

 

Compute

 

As a virtualization administrator, you're very familiar with managing the hypervisor, templates, and images. These images are the base operating environment in which your applications run. We've spent lots of time tweaking and tuning these images to make our applications run as efficiently as possible. Once our images are in production, we have to solve how to scale for load and how to maintain a solid environment without affecting production. This ranges from rolling out patches to upgrading software.

 

As we move further into a hybrid IT model and begin to use the cloud providers’ tools, image management becomes a little easier. Most of the public cloud providers offer managed autoscaling groups, where resources spin up or down automatically based on a metric like CPU utilization, without you having to intervene. Some providers offer multiple upgrade rollout strategies for autoscaling groups, ranging from a simple canary rollout to upgrading the entire group at once. These new tools help us scale with application demand automatically and simplify our software rollout strategy.
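
To show the kind of decision an autoscaling group makes on your behalf, here's a minimal sketch of target-tracking-style scaling logic driven by CPU utilization. The managed services add cooldowns, health checks, and rollout strategies on top of this; the target and bounds here are illustrative only.

```python
def desired_instance_count(current_count: int, avg_cpu: float,
                           target_cpu: float = 50.0,
                           min_count: int = 2, max_count: int = 10) -> int:
    """Rough target-tracking logic: keep average CPU near the target by
    scaling the group proportionally, within fixed bounds."""
    if avg_cpu <= 0:
        return min_count
    desired = round(current_count * (avg_cpu / target_cpu))
    return max(min_count, min(max_count, desired))

# Example: 4 instances running at 80% average CPU -> scale out
print(desired_instance_count(current_count=4, avg_cpu=80.0))  # 6
# Example: 4 instances idling at 15% average CPU -> scale in, bounded by the minimum
print(desired_instance_count(current_count=4, avg_cpu=15.0))  # 2
```

In other words, you describe the target and the bounds, and the provider keeps adjusting the group for you.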

 

Final Thoughts

 

I don't like the concept of having to throw away years of experience to learn this new model. Yes, the cloud abstracts a lot of the underlying hardware and virtualization, but traditional infrastructure skillsets and experience can still be applied. We will always need to monitor our applications to know how they work and interact with other services in the environment. We need to understand the differences in the cloud. Don't take for granted that what we did in the private data center will be a free service in the public cloud. Understand that the public cloud is a business, and while some of the services are free, most are not. Beyond new network equipment costs or ISP costs, traditional infrastructure didn't account for the cost of moving data around inside the data center. I believe we can use our traditional infrastructure experience, apply new knowledge to understand some of the differences, and build new skills for the public cloud to create a successful hybrid IT environment.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article on some of the security concerns that come along with containers. I was surprised to hear that a third of our customers are planning to implement containers in the next 12 months. The rate of change in IT never seems to slow down.

 

Open-source containers, which isolate applications from the host system, appear to be gaining traction with IT professionals in the U.S. defense community. Security remains a notable concern for a couple of reasons.

 

First, containers are fairly new, and many administrators aren’t completely familiar with them. It’s difficult to secure something you don’t understand. Second, containers are designed in a way that hampers visibility. This lack of visibility can make securing containers taxing.

 

Layers Upon Layers

 

Containers are composed of a number of technical abstraction layers necessary for auto-scaling and developing distributed applications. They allow developers to scale application development up or down as necessary. Visibility becomes particularly problematic when using an orchestration tool like Docker Swarm or Kubernetes to manage connections between different containers, because it can be difficult to tell what is happening.

 

Containers can also house different types of applications, from microservices to service-oriented applications. Some of these may contain vulnerabilities, but that can be impossible to know without proper insight into what is actually going on within the container.

 

Protecting From the Outside In

 

Network monitoring solutions are ideal for network security tasks like identifying software vulnerabilities and detecting and mitigating phishing attacks, but they are insufficient for container monitoring. Containers require a form of software development life-cycle monitoring on steroids, and we aren’t quite there yet.

 

Security needs to start outside the container to prevent bad stuff from getting inside. There are a few ways to do this.

 

Scan for Vulnerabilities

 

The most important thing administrators can do to secure their containers is scan for vulnerabilities in their applications. Fortunately, this can be done with network and application monitoring tools. For example, server and application monitoring solutions can be used as security blankets to ensure applications developed within containers are free of defects prior to deployment.

 

Properly Train Employees

 

Agencies can also ensure their employees are properly trained and that they have created and implemented appropriate security policies. Developers working with containers need to be acutely aware of their agencies’ security policies. They need to understand those policies and take necessary precautions to adhere to and enforce them.

 

Containers also require security and accreditation teams to examine security in new ways. Government IT security solutions are commonly viewed from a physical, network, or operating system level; the components of software applications are seldom considered, especially in government off-the-shelf products. Today, agencies should train these teams to be aware of approved or unapproved versions of components inside an application.

 

Get CIOs on Board

 

Education and enforcement must start at the top, and leadership must be involved to ensure their organizations’ policies and strategies are aligned. This will prove to be especially critical as containers become more mainstream and adoption continues to rise. It will be necessary to develop and implement new standards and policies for adoption.

 

Open-source containers come with just as many questions as they do benefits. Those benefits are real, but so are the security concerns. Agencies that can address those concerns today will be able to arm themselves with a development platform that will serve them well, now and in the future.

 

Find the full article on American Security Today.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Today’s public cloud hyperscalers, such as Microsoft Azure, AWS, and Google, provide a whole host of platforms and services to enable organizations to deliver pretty much any workload you can imagine. However, they aren’t the be-all and end-all of an organization’s IT infrastructure needs.

 

Not too long ago, the hype in the marketplace was very much geared toward moving all workloads to the public cloud. If you didn’t, you were behind the curve. The reality is, though, it’s just not practical to move all existing infrastructure to the cloud. Simply taking workloads running on-premises and running them in the public cloud is considered by many to be the wrong way to do it. This is referred to as a “lift and shift.” That’s not to say that’s the case for all workloads. Things like file servers, domain controllers, line of business application servers, and so on tend to cost more to run as native virtual machines in the public cloud and introduce extra complexity with application access and data gravity.

 

The “Cloud-First” mentality adopted by many organizations is disappearing and gradually being replaced with “Cloud-Appropriate.” I’ve found a lot of the “Cloud-First” messaging has been pushed from the board level without any real consideration or understanding of what it means to the organization, other than the promise of cost savings. Over time, the pioneers who adopted public cloud first have gained knowledge and wisdom about what operating in a “Cloud-First” environment looks like. The operating costs don’t always work out as expected—and can even be higher.

 

Let’s look at some examples of what “Cloud-Appropriate” may mean to you. I’m sure you’ve heard of Office 365, which offers an alternative solution to on-premises workloads such as email servers and SharePoint servers, and offers additional value with tools like workplace collaboration via Microsoft Teams, task automation with Microsoft Flow, and so on. This Software as a Service (SaaS) solution, born in the public cloud, can take full advantage of the infrastructure that underpins it. As an organization, the cost of managing the traditional infrastructure for those services disappears. You’re left with a largely predictable bill and arguably superior service offering by just monitoring Office 365.

 

Application stack refactoring is another great place to think about “Cloud-Appropriate.” You can take advantage of the services available in the public cloud, such as highly performant database solutions like Amazon RDS or the ability to take advantage of public cloud’s elasticity to easily create more workloads in a short amount of time.

 

So where does that leave us? A hybrid approach to IT infrastructure. Public cloud is certainly a revolution, but for many organizations, the evolution of their existing IT infrastructure will better serve their needs. Hyperconverged infrastructure is a fitting example of the evolution of a traditional three-tier architecture comprising networking, compute, and storage. The services offered are the same, but the footprint in terms of space, cooling, and power consumption is lower while offering greater levels of performance, which ultimately offers better value to the business.

 

 

Further Reading

CRN and IDC: Why early public cloud adopters are leaving the public cloud amidst security and cost concerns. https://www.crn.com/businesses-moving-from-public-cloud-due-to-security-says-idc-survey

All too often, companies put the wrong people on projects. We see projects where the people making key decisions lack a basic understanding of the technology involved. I’ve been involved with hundreds of projects in which key decisions for the technology portions are made by well-meaning people who have no understanding of what they’re trying to approve.

 

For example, back in 2001 or 2002, a business manager read that XML was the new thing to use to build applications. He decided his department's new knowledge base must be built as a single XML document so it could be searched with the XML language. Everyone in the room sat dumbfounded, and we then spent hours trying to talk him out of his crazy idea.

 

I’ve worked on numerous projects where the business decided to buy some piece of software, and the day the vendor showed up to do the install was the day we found out about the project. The hardware we were supposed to have racked and configured wasn't there; nobody had told us about the demanding uptime requirements the software was supposed to meet; and the software licenses required to run the solution were never discussed with those of us in IT.

 

If the technology pros had been involved from the early stages of these projects, the inherent problems could have been surfaced much earlier and mitigated before the go-live date. Typically, when dealing with issues like these, everyone on the project is annoyed, and that’s no way to make forward progress. The business is mad at IT because IT didn’t deliver what the vendor needed. IT is mad at the business because they found out they needed to provide a solution too late to ensure smooth installation and service turn-up. The company is mad at the vendor because the vendor didn’t meet the deadline it was supposed to meet. The vendor is mad at the client for not having the servers the business was told they needed.

 

If the business unit had simply brought the IT team into the project earlier—hopefully much earlier—a lot of these problems wouldn’t have happened. Having the right team to solve problems and move projects through the pipeline makes everything easier and helps projects finish successfully. That’s the entire point: to complete the project successfully. The bonus is that people aren’t angry at each other for entirely preventable reasons.

 

Having the right people on projects from the beginning can make all the difference in the world. If people aren’t needed on a project, let them bow out, but let them decide that they don’t need to be involved. Don’t decide for them. By choosing for them, you can introduce risk to the project and end up creating more work for people. After the initial project meetings, send a general notice to the people on the project letting them know they can opt out of the rest of the project if they aren’t needed; if they’re necessary, let them stay.

 

I know in my career I’ve sat in a lot of meetings for projects, and I’d happily sit in hundreds more to avoid finding out about projects at the last minute. We can make the projects that we work on successful, but only if we work together.

 

Top 3 Considerations to Make Your Project a Success

  • Get the right people on the project as early as possible
  • Let people and departments decide if they are needed on a project; don't decide for them
  • Don't shame people who leave a project because they aren't needed on the project team

Had a great time at the Seattle SWUG last week. I always enjoy the happiness I find at SWUG events. Great conversations with customers and partners, and wonderful feedback collected to help us make better products and services. Thanks to everyone who was able to participate.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

90% of data breaches in US occur in New York and California

There is no such thing as a “leaky database.” Databases don’t leak; it’s the code you layer on top that does.

 

Top 5 Configuration Mistakes That Create Field Days for Hackers

And sometimes the code is solid, but silly default configurations are your main security risk.

 

A ransomware attack is holding Baltimore's networks hostage

Maybe Baltimore thought they were safe because they weren’t in California or New York.

 

Facebook sues data collection and analytics firm Rankwave

“Hey, you can’t steal our users’ data and sell it, only *WE* get to do that!”

 

Microsoft recommends using a separate device for administrative tasks

I’ve lost track of the number of debates I’ve had with other admins who insist on installing tools onto servers “just in case they are needed.”

 

Hackers Still Outpace Breach Detection, Containment Efforts

It takes an intruder minutes to compromise an asset, and months before you discover it happened.

 

Watch Microsoft’s failed HoloLens 2 Apollo moon landing demo

This is a wonderful demo, even if it failed at the time they tried it live.

 

Breakfast at the SWUG, just in case you needed an incentive to attend:

Starting with DevOps can be hard. Often, it's not entirely clear why you're getting on the DevOps train. Sometimes, it's simply because it's the new trendy thing to do. For some, it's to minimize the friction between the traditional IT department (“Ops”) and developers building custom applications (“Dev”). Hopefully, it will solve some practical issues you and your team may have.

 

In any case, it's worth looking at what DevOps brings to the table for your situation. In this post, I'll help you set the context of the different flavors and aspects of DevOps. “CALMS” is a good framework to use to look at DevOps.

 

CALMS

CALMS neatly summarizes the different aspects of DevOps. It stands for:

  • Culture
  • Automation
  • Lean
  • Measurement
  • Sharing

 

Note how technology and technical tooling are only one part of this mix. This might be a surprise, as many focus on just the technological aspects of DevOps. In reality, there are many more aspects to consider.

 

And taking this one step further: getting started with DevOps is about creating and fostering high-performance teams that imagine, develop, deploy, and operate IT systems. This is why Culture, Automation, Lean, Measurement, and Sharing are equal parts of the story.

 

Culture

Arguably, the most important part of creating highly effective teams is shared responsibility. Many organizations choose to create multi-disciplinary teams that include specialists from Ops, Dev, and the business. Each team can take full responsibility for the full lifecycle of part of (or an entire) IT system, technical domain, or part of the customer journey. The team members collaborate, experiment, and continuously improve their system. They'll take part in blameless post-mortems or sprint reviews, providing feedback and improving processes and collaboration.

 

Automation

This is the most concrete part of DevOps: tooling and automation. It's not just about automation, though. It's about knowing the flow of information through the process from development to production, also called a value stream, and automating it.

 

For infrastructure and Ops, this is also called Infrastructure-as-Code: a methodology of applying software development practices to infrastructure engineering and operational work. The key to infra-as-code is treating your infrastructure as a software project. This means maintaining and managing the state of your infrastructure in version-controlled declarative code and definitions. This code goes through a pipeline of testing and validation before the state is mirrored on production systems.
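
Here's a deliberately simplified sketch of that idea: the desired state lives in version-controlled declarative data, and a reconcile step compares it with the actual state and reports the difference before anything is applied. Real tools such as Terraform, Ansible, or Puppet do this far more thoroughly; the resource names below are hypothetical.

```python
# Desired state, as you'd keep it in a version-controlled file (e.g., YAML/JSON)
desired_state = {
    "web-01": {"size": "small", "ports": [80, 443]},
    "web-02": {"size": "small", "ports": [80, 443]},
    "db-01": {"size": "large", "ports": [5432]},
}

# Actual state, as reported by the platform's API (hypothetical snapshot)
actual_state = {
    "web-01": {"size": "small", "ports": [80, 443]},
    "db-01": {"size": "medium", "ports": [5432]},
}

def plan(desired: dict, actual: dict) -> list:
    """Compute the changes needed to make the actual state match the desired state."""
    changes = []
    for name, spec in desired.items():
        if name not in actual:
            changes.append(("create", name, spec))
        elif actual[name] != spec:
            changes.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            changes.append(("delete", name, None))
    return changes

for action, name, spec in plan(desired_state, actual_state):
    print(action, name, spec or "")
```

Changing the infrastructure then means changing the file, reviewing it, and letting the pipeline apply the resulting plan.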

 

A good way to visualize this is the following flow chart, which can be equally applied to infrastructure engineering and software engineering.

 

The key goal of visualizing these flows is to identify waste, which in IT is manual and reactive labor. Examples are fixing bugs, mitigating production issues, supporting customer incidents, and solving technical debt. This is all a form of re-work that, in an ideal world, could be avoided. This type of work takes engineering time away from the good kind of manual work: creating new MVPs for features, automation, tests, infrastructure configuration, etc.

 

Identifying manual labor that can be simplified and automated creates an opportunity to choose the right tools to remove waste, which we'll dive into in this blog series. In upcoming posts, you'll learn how to choose the right set of DevOps monitoring tools, which isn’t an easy task by any stretch of the imagination.

 

Lean

Lean is a methodology first developed by Toyota to optimize its factories. These days, Lean can be applied to manufacturing, software development, construction, and many other disciplines. In IT, Lean is valuable for visualizing and mapping out the value stream, a single flow of work within your organization that benefits a customer. An example would be taking a piece of code from ideation to the hands of the customer by way of a production release. It's imperative to identify and visualize your value stream, with all its quirks, unnecessary approval gates, and process steps. With this, you'll be able to remove waste and toil from the process and create flow. These are all important aspects of creating high-performing teams. If your processes contain a lot of waste, complexity, or variation, chances are the team won't be as successful.

 

Measurements

How do you measure performance and success? The DevOps mindset heavily leans on measuring performance and progress. While it doesn't prescribe specific metrics to use, there are a couple of common KPIs many teams go by. For IT teams, there are four crucial metrics to measure the team's performance, inside-out:

  1. Deployment Frequency: how often does the team deploy code?
  2. Lead time for changes: how long does it take to go from code commit to code successfully running in production?
  3. Time to restore service: how long does it take to restore service after an incident (like an unplanned outage or security incident)?
  4. Change failure rate: what percentage of changes results in an outage?

 

In addition, there are some telling metrics to measure success from the outside-in:

  1. Customer satisfaction rate (NPS)
  2. Employee satisfaction (happiness index)
  3. Employee productivity
  4. Profitability of the service

 

Continuously improving the way teams work and collaborate and minimizing waste, variation, and complexity will result in measurable improvements in these key metrics.

 

Sharing

To create high-performing teams, team members need to understand each other while still contributing their expertise. This creates tension between “knowing a lot about a few things” and “knowing a little about a lot of things,” also known as the T-shaped knowledge problem. To balance the two, high-performing teams are known to spend a decent amount of time sharing knowledge and exchanging experiences. This can take shape in many ways, like peer review, pair programming, knowledge sessions, communities of expertise, meetups, training, and internal champions who further their field with coworkers.

 

Next Up

With this contextual overview, we've learned DevOps is much more than just technology and tooling; grasping these concepts is vital for creating high-performance teams. But choosing the right approach for automation is no joke, either. There's so much to choose from, ranging from small tools that excel at one task but are harder to integrate into your toolchain to “OK” enterprise solutions that do many tasks but come pre-integrated. In the next post in this getting started with DevOps series, we'll look at why choosing a one-size-fits-all solution won't work. 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s another interesting article from my colleague Sascha Giese on how improved communications and training can help organizations keep their infrastructure updated. Training is one of those things that’s always a priority but rarely makes it to the top of the list.

 

Government technology professionals dedicate much of their time to optimizing their IT infrastructures. So, when new policies or cultural issues arise, it can be challenging to integrate these efficiently within the existing landscape.

 

The SolarWinds IT Trends Report 2018 revealed that, in the U.K. public sector, this challenge is yet to be resolved—43% of those surveyed cited inadequate organizational strategies as the reason for the lack of optimization, followed closely by 42% who selected insufficient training investment. Let’s explore these topics further.

 

Communication Should Never Be a One-off

 

Organizational IT strategy may start at the top, but often it can get lost in translation or diluted as it’s passed down through the ranks—if it gets passed down at all. As such, IT managers might be doing their daily jobs, but they may not be working with an eye towards their agencies’ primary objectives.

 

One example of this is the use of public cloud, which—despite the Cloud First policy being introduced in 2013—is still not being realized across the U.K. government to its full potential, with less than two-thirds (61%) of central government departments having adopted any public cloud so far.

 

Agency IT leaders should consider implementing systematic methods for communicating and disseminating information, ensuring that everyone understands the challenges and opportunities and can work toward strategic goals. Messages could also then be reinforced on an ongoing basis. The key is to make sure that the U.K. government IT strategy remains top-of-mind for everyone involved and is clearly and constantly articulated from the top down.

 

Training Should Be a Team Priority

 

The IT Professionals Day 2018 survey by SolarWinds found that, globally, 44% of public sector respondents would develop their IT skillset if they had an extra hour in their workday. Formal training, however, isn’t free: travel to seminars and class tuition fees cost money that agencies may not have.

 

Training can have a remarkably positive impact on efficiency. In addition to easing the introduction of new technologies, well-trained employees know how to better respond in the case of a crisis, such as a network outage or security breach. Their expertise can save precious time and be an effective safeguard against intruders and disruption, which can be invaluable in delivering better services to the public.

 

Self-training can be just as important as agency-driven programs. It may be beneficial in the long run for technology professionals to hold themselves accountable for learning about agency objectives and how tools can help them meet those goals, supported with an allocated portion of time that professionals can use for this purpose. People don’t necessarily learn through osmosis, but through action, and at different levels.

 

For this and other education initiatives, technology professionals should use the educational allowances allocated to them by their organizations, which can sometimes run into thousands of dollars. They should take the time to learn about the technologies they already have in-house, but also examine other solutions and tools that will help their departments become fully optimized, especially when these may form part of a broader public sector IT strategy.

 

Though surveys like the IT Trends Report have highlighted a knowledge and information-sharing gap, implementing stronger communication and training initiatives in government organizations could help close it. And by producing better-optimized environments for IT teams, these initiatives can improve the quality of the service departments deliver to the wider public, bringing about better changes for all.

 

Find the full article on Open Access Government.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Hello THWACKers, long time no chat! Welcome to part one of a five-part series on machine learning and artificial intelligence. I figured, what better place to start than in the highly contested world of ethics? You can stop reading now because we’re talking about ethics, and that’s the last thing anyone ever wants to talk about. But before you go, know this isn’t your standard Governance, Risk, and Compliance (GRC) talk where everything is driven and modeled by a policy that can be easily policed, defined, dictated, and followed. Why isn’t it? Because if that were true, we wouldn’t need any discussion on the topic of ethics; it would merely be a discussion of policy—and who doesn’t love policy?

 

Let me start by asking you an often overlooked but important question. Does data have ethics? On its own, the simple answer is no. As an example, we have Credit Reporting Agencies (CRAs) that collect our information, like names, birthdays, payment history, and other obscure pieces of information. Independently, that information is data, which doesn’t hold, construe, or leverage ethics in any way. If I had a database loaded with all this information, it would be a largely boring dataset, at least on the surface.

 

Now let’s take the information the CRAs have and say I go to get a loan to buy a house, get car insurance, or rent an apartment. If I pass the credit check and get the loan, the data is great. Everybody wins. But if I’m ranked low in their scoring system and don’t get to rent an apartment, for example, the data is bad and unethical. OK, on the surface the information may not be unethical per se, but it can be used unethically. Sometimes (read: often) a person's credit, name, age, gender, or ethnicity will be factored into models to label them “more creditworthy” or “less creditworthy” when it comes to loans, mortgages, rent, and so on.

 

That doesn’t mean the data or the information in the table or model is ethical or unethical, but certainly claims can be made that biases (often human biases) have influenced how that information has been used.

 

This is a deep subject—how can we make sure our information can’t be used inappropriately or for evil? You’re in luck. I have a simple answer to that question: You can’t. I tried this once. I used to sell Ginsu knives, and I never had to worry about them being used for evil because I put a handy disclaimer on them. Problem solved.

 

Disclaimer

 

Seems like a straightforward plan, right? That’s what happens when policy, governance, and other aspects of GRC enter into our relationship with data: “We can label things so people can’t use them for harm.” Well, we can label them all we want, but unless we enact censorship, we can’t STOP people from using them unethically.

 

So, what do we do about it? The hard-and-fast, easy answer for anyone new to machine learning or wanting to work with artificial intelligence is this: use your powers for good and not evil. I use my powers for good, and I know a rock can be used to break a window or hurt someone (evil), but it can also be used to build roads and buildings (good). We’re not going to ban all rocks because they could possibly be used wrongly, just as we’re not going to ban everyone’s names, birthdays, and payment history because they could be misused.

 

We have to make a concerted effort to realize the impacts of our actions and find ways to better the world around us through them. There’s still so much more to discuss on this topic, but approaching it with an open mind and realizing there is so much good we can do in the world will leave you feeling a lot happier than dwelling on the darkness and worry surrounding things you cannot control.

 

Was this too deep? Probably too deep a subject for the first post in this series, but it was timely and pertinent to a Lightning Talk I was forced (yes, I said forced) to give on machine learning and ethics at the recent ML4ALL Machine Learning Conference.

 

ML4ALL Lightning Talk on Ethics

 

https://youtu.be/WPZd2dz5nfc?t=17238

 

Feel free to enjoy the talk here, and if you found this useful, terrifying, or awkward, let’s talk about it. I find ethics a difficult topic to discuss, mainly because people want to enforce policy on things they cannot control, especially when the bulk of the information is “public.” But the depth of classifying and changing the classification of data is best saved for another day.

In Seattle this week for the Seattle SWUG. If you're in the room reading this, then you aren’t paying attention to the presentation. So maybe during the break you should find me, say hello, and we can talk data or bacon.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

The productivity pit: how Slack is ruining work

Making me feel better about my decision to quit Slack last year.

 

Dead Facebook users could outnumber living ones within 50 years

Setting aside the idiocy of thinking Facebook will still be around in 50 years, the issue with removing deceased users from platforms such as Facebook or LinkedIn is real and not easily solved.

 

Hackers went undetected in Citrix’s internal network for six months

For anyone believing they are on top of securing data, hackers went undetected in Citrix’s internal network for six months. Six. Months.

 

Dutch central bank tested blockchain for 3 years. The results? ‘Not that positive’

One of the more realistic articles about blockchain: an organization that admits it's trying, isn't having smashing success, and is willing to keep researching. A refreshing piece when compared to the marketing fluff about blockchain curing polio.

 

Docker Hub Breach: It's Not the Numbers; It's the Reach

Thanks to advances in automation, data breaches in a place like Docker can end up resulting in breaches elsewhere. Maybe it’s time we rethink authentication. Or perhaps we rethink who we trust with our code.

 

Los Angeles 2028 Olympics budget hits $6.9B

Imagine if our society were able to privately fund $6.9B toward something like poverty, homelessness, or education instead of arranging extravagant events that cost $1,700 a ticket to attend in person.

 

A Not So Fond Look Back At Action Park, America's Scariest Amusement Park

Is it weird that watching this video makes me want to go to this park even more?

 

I like it when restaurants post their menu outside:

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article from my colleague Sascha Giese on strategies for digital transformation in the public sector. Our government customers here in the states have similar challenges and should benefit from the discussion.

 

For an organization like the NHS, digital transformation can present challenges, but the need for faster service delivery, cost management efficiency, and improvements to patient care make the adoption of technology a strategic priority.

 

Digital transformation refers to a business restructuring its systems and infrastructure to avoid a potential tipping point caused by older technologies and downward market influences. This transformation can also be disruptive, as it affects nearly every aspect of the organization.

 

For an organization like the U.K. NHS, this can present more challenges than for private-sector businesses.

 

Outdated infrastructure often struggles to keep up with the amount and type of data being produced, and with the volume of data the NHS processes now being supplemented by data coming in from private healthcare providers as well, the technology deployed could fall further behind. There are also growing concerns regarding management and security of this data.

 

Because of this, the NHS is in the perfect position to benefit from implementing a digital transformation strategy. No matter how small, starting now could help keep doctors away from paperwork and closer to their patients, which, at the end of the day, is what really matters.

 

For the NHS to reap the benefits of digital transformation, it’s important for IT decision makers to consider emerging technologies, such as cloud, artificial intelligence (AI), and predictive analytics.

 

Given the limited awareness of how and why digital transformation can benefit the NHS, it is understandable that a recent survey from SolarWinds, conducted by iGov, found that nearly one in five NHS trusts surveyed have no digital transformation strategy, and a further 24% have only just started one.

 

Being aware is the first hurdle to overcome, and the NHS is already on its way to conquering it.

 

Getting to grips with new technology is always going to be a challenge, even more so for those handling some of the U.K.’s most critical data—that of our health and wellbeing. Acknowledging that legacy technology is holding the NHS back means they’re best placed to start implementing these changes.

 

Next, IT leaders should consider implementing a transformation strategy that supports these goals. Enlisting the right people from within the organization, whose expertise can guide the process, and implementing the best tools can help enable visibility and management throughout. Some methods to think about executing first include:

 

  • Simplifying current IT: Complexity often leads to mistakes, longer processes, and increased costs across the board.
  • Keeping IT flexible: Hybrid environments are the norm for many agencies. NHS trusts should consider technology that enables the use of private, public, or hybrid cloud, where data, workloads, and applications can be moved from one platform to another with a simple click.
  • Maintaining IT resilience: Trusts that need to run 24/7 should use systems that ensure both data availability and data protection.
  • Creating a transformational culture: Changing the culture starts at the top; if trust leaders are unwilling to consider change, it’s likely that their subordinates are also resistant.

 

With the right preparation and tools in place, the journey to digital transformation can be a positive experience for improving NHS IT solutions and can yield impressive results.

 

The healthcare industry can benefit greatly from implementing transformation strategies, so the sooner these can be integrated, the quicker we can see improvements across the board.

 

Find the full article on Building Better Healthcare.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Happy May Day! We are one-third of the way through the calendar year. Now is a good time to check on your goals for 2019 and adjust your plans as needed.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Departing a US Airport? Your Face Will Be Scanned

My last two trips out of DTW have used this technology. I had initial privacy concerns, but the tech is deployed by Border Patrol, and your data is not shared with the airline. In other words, the onus of passport control at the gate is being removed from the airlines and put into the hands of the people that should be doing the checking.

 

Password "123456" Used by 23.2 Million Users Worldwide

This is why we can’t have nice things.

 

Hacker Finds He Can Remotely Kill Car Engines After Breaking Into GPS Tracking Apps

“…he realized that all customers are given a default password of 123456 when they sign up.”

 

Some internet outages predicted for the coming month as '768k Day' approaches

The outage in 2014 was our wake-up call. If your hardware is old, and you haven’t made the necessary configuration changes, then you deserve what's coming your way.

 

Password1, Password2, Password3 no more: Microsoft drops password expiration rec

Finally, some good news about passwords and security. Microsoft will no longer force users to reset passwords after a certain amount of time.

 

Ethereum bandit makes off with $6.1M after bypassing weak private keys

Weak passwords are deployed by #blockchain developers, further eroding my confidence in this technology and the people building these systems.

 

Many Used Hard Drives Sold on eBay Still Contain Leftover Data

Good reminder to destroy your old hard drives and equipment.

 
