

Welcome to the first in a five-part series focusing on information security in a hybrid IT world. Because I’ve spent the vast majority of my IT career as a contractor for the U.S. Department of Defense, I view information security through the lens that protecting national security and keeping lives safe is the priority. The effort and manageability challenges of the security measures are secondary concerns.

 

 

Photograph of the word "trust" written in the sand with a red x on top.

Modified from image by Lisa Caroselli from Pixabay.

 

About Zero Trust

In this first post, we’ll explore the Zero Trust model. Odds are you’ve heard the term “Zero Trust” multiple times in the nine years since Forrester Research’s John Kindervag created the model. In more recent years, Google and Gartner followed suit with their own Zero Trust-inspired models: BeyondCorp and LeanTrust, respectively.

 

“Allow, allow, allow.” In the old “Get a Mac” TV commercial, Windows Guy must authorize each request. “It’s a security feature of Windows Vista,” he explains to Justin Long, the much cooler Mac Guy. Windows Guy trusts nothing, and each request requires authentication (from himself) and authorization.

 

The Zero Trust model works much the same way. By default, nothing is trusted or privileged, and internal requests don’t get preference over external requests. Several other methods help enforce the model: least-privilege access, strict access rights controls, intelligent analytics for greater insight and logging, and additional layered security controls. Together, these are the Zero Trust model in action.

 

If you think Zero Trust sounds like “Defense-in-Depth,” you are correct. Defense-in-Depth will be covered in a later blog post. As you know, the best security controls are always layered.

 

Why Isn’t Trust but Verify Enough?

Traditional perimeter firewalls, the gold standard for “trust but verify,” leave a significant vulnerability in the form of internal, trusted traffic. Perimeter firewalls focus on keeping untrusted (and unauthorized) external traffic out of the network. This type of traffic is usually referred to as “North-South” or “Client-Server.” Another kind of traffic exists, though: “East-West” or “Application-Application” traffic, which probably won’t hit a perimeter firewall because it never leaves the data center.

 

Most importantly, perimeter firewalls don’t apply to hybrid cloud traffic (the space where private and public networks coalesce) or to public cloud traffic. Additionally, while the cloud simplifies some things, like building scalable, resilient applications, it adds complexity in other areas, like networking, troubleshooting, and securing one of your greatest assets: data. Cloud also introduces new traffic patterns and infrastructure you share with others but don’t control. Hybrid cloud blurs the trusted and untrusted lines even further. Applying the Zero Trust model lets you begin to mitigate some of the risks from untrusted public traffic.

 

Who Uses Zero Trust?

In any layered approach to security, most organizations are probably already applying some Zero Trust principles, like multi-factor authentication, least privilege, and strict ACLs, even if they haven’t reached the stage of requiring authentication and authorization for all requests from processes, users, devices, applications, and network traffic.

 

Also, the CIO Council, “the principal interagency forum to improve [U.S. Government] agency practices for the management of information technology,” has a Zero Trust pilot slated to begin in summer 2019. The National Institute of Standards and Technology, Department of Justice, Defense Information Systems Agency, GSA, OMB, and several other agencies make up this government IT security council.

 

How Can You Apply Principles From the Zero Trust Model?

 

  • Whitelists. A list of who to trust. It can apply to the specific processes, users, devices, applications, or network traffic granted access; anything not on the list is denied. The opposite is a blacklist, where you need to know the specific threats to deny, and everything else gets through. (See the sketch after this list for a minimal default-deny check.)

  • Least privilege. The principle in which you assign the minimum rights to the minimum number of accounts to accomplish the task. Other parts include separation of user and privileged accounts with the ability to audit actions.

  • Security automation for monitoring and detection. Intrusion prevention systems that stop suspect traffic or processes without manual intervention.

  • Identity management. Harden the authentication process with a one-time password or implement multi-factor authentication (requires proof from at least two of the following categories: something you know, something you have, and something you are).

  • Micro-segmentation. Network security access control that allows you to protect groups of applications and workloads and minimize any damage in case of a breach or compromise. Micro-segmentation also can apply security to East-West traffic.

  • Software-defined perimeter. Micro-segmentation, designed for a cloud world, in which assets or endpoints are obscured in a “black cloud” unless you “need to know (or see)” the assets or group of assets.
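
To make the whitelist idea concrete, here’s a minimal default-deny sketch in Python. It isn’t tied to any particular product, and the users, devices, and applications in it are made up; a real implementation would live in a firewall, proxy, or identity and access management layer.

# Minimal default-deny (whitelist) check. The entries and request fields are
# hypothetical; a real implementation would sit in a firewall, proxy, or IAM layer.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    user: str
    device: str
    application: str

# Only these (user, device, application) combinations are trusted.
WHITELIST = {
    ("alice", "laptop-042", "payroll-app"),
    ("svc-backup", "backup-host", "object-store"),
}

def is_allowed(req):
    """Default deny: a request is allowed only if it appears on the whitelist."""
    return (req.user, req.device, req.application) in WHITELIST

if __name__ == "__main__":
    print(is_allowed(Request("alice", "laptop-042", "payroll-app")))        # True: listed
    print(is_allowed(Request("mallory", "unknown-device", "payroll-app")))  # False: not listed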

 

Conclusion

Implementing any security measure takes work and effort to keep the bad guys out while letting the good guys in and, most importantly, keeping valuable data safe.

 

However, security breaches and ransomware attacks increase every year. As more devices come online, perimeters dissolve, and the amount of sensitive data stored online grows, so does the pool of malicious actors and would-be hackers.

 

It’s a scary world, one in which you should consider applying “Zero Trust.”

This week's Actuator comes to you direct from the Fire Circle in my backyard because (1) I am home for a few weeks and (2) it finally stopped raining. The month of May has been filled with events for me these past nine years, but not this year. So of course, the skies have been gray for weeks. We are at 130% of normal rainfall year to date, and just one more inch of rain between now and September 30th would set a new record.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

WhatsApp Finds and Fixes Targeted Attack Bug

I’m shocked, just shocked, to find out WhatsApp and Facebook may have intentionally been spying on their users.

 

Microsoft Reveals First Hardware Using Its New Compression Algorithm

And then they open sourced the technology, making it available for anyone to use, including AWS. More evidence that this is the new Microsoft.

 

Strong Opinions Loosely Held Might be the Worst Idea in Tech

Toxic Certainty Syndrome is a real problem, made worse by the internet. I’m not sure the proposed solution of offering percentages is the right choice for everyone, but I’m 100% certain we need to do something.

 

Amazon rolls out machines that pack orders and replace jobs

Amazon gets subsidies with the promise of creating jobs, then deploys robots to remove those same jobs.

 

San Francisco banned facial recognition tech. Here’s why other cities should too.

I’m with San Francisco on this, mostly due to the inherent bias found in the technology at the current time.

 

Gmail logs your purchase history, undermining Google’s commitment to privacy

Don’t be evil, unless you can get away with it for decades.

 

Selfie Deaths Are an Epidemic

Something, something, Darwin.

 

Thankful to have the opportunity to walk around Seattle after the SWUG two weeks ago:

As the public cloud continues to grow in popularity, it's started to penetrate our private data centers and make hybrid IT a reality. More companies are adopting a hybrid IT model, and I keep hearing that we need to forget everything we know about infrastructure and start over when it comes to the public cloud. It's very difficult for me to imagine how to do this. I've spent the last fifteen years understanding infrastructure, troubleshooting infrastructure, and managing infrastructure. I've spent a lot of time perfecting my craft. I don't want to just throw it away. Instead, I’d like to think experienced systems administrators can bring their knowledge and build on their experience to bring value to a hybrid IT model. I want to explore a few areas where on-premises system administrators can use what they know today, build on that knowledge, and apply it to hybrid IT.

 

Monitoring

 

Monitoring is a critical component of a solid, functional data center. It informs us when critical services are down. It helps create baselines, so we know what to measure against and how to improve applications and services. Monitoring is so important that there are entire facilities, called Network Operations Centers (NOC), dedicated to this single function. Operations staff who know how to properly configure monitoring systems and home in on not just critical services, but also the entire environment the application requires, provide value.

 

As we begin to shift workloads to the public cloud, we need to continue monitoring the entire stack on which our application lives. We'll need to start expanding our toolset to monitor these workloads in the cloud; trade in the ability to monitor an application service for the ability to monitor an API. All public cloud providers built their services on top of APIs, so start becoming familiar with how to interact with an API. Change the way you think about up-and-down monitors. Monitor whether the instance in the cloud is sized correctly, because you're paying for both the size and the time that instance is running. We know what a good monitoring configuration looks like. Now we need to expand it to include the public cloud.
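
As a rough illustration of what monitoring an API (rather than a service) might look like, here's a minimal sketch that polls a hypothetical cloud provider's REST endpoint and flags instances that look oversized. The URL, token, and response fields are placeholders, not any real provider's API.

# Sketch: poll a (hypothetical) cloud provider API and flag oversized instances.
# The endpoint, token, and JSON fields are placeholders, not a real provider's API.
import requests

API_URL = "https://cloud.example.com/v1/instances"   # hypothetical endpoint
API_TOKEN = "changeme"                               # hypothetical credential

def check_instances(cpu_threshold=20.0):
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}, timeout=10)
    resp.raise_for_status()
    for inst in resp.json().get("instances", []):
        # Flag running instances whose average CPU sits well below capacity:
        # you pay for size and uptime, so an idle large instance is wasted spend.
        if inst.get("state") == "running" and inst.get("avg_cpu_percent", 100.0) < cpu_threshold:
            print(f"{inst['id']}: {inst['type']} averaging {inst['avg_cpu_percent']}% CPU - consider resizing")

if __name__ == "__main__":
    check_instances()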

 

Networking

 

One of the biggest things to be aware of when connecting a private data center with a public cloud provider is that there are additional networking fees. The cloud providers want businesses to move as much of their data as possible to the public cloud, so as an incentive, they provide free inbound data transfers. To move your data out, or across different regions, be aware that there are additional fees. Cloud providers have regions all across the world, and depending on which region your data leaves from, the costs may change. Additional charges may also be incurred from other services, such as deploying an appliance or using a public IP address. These are technical skills to build on, and they change the way we think about networking when we apply them to hybrid IT.
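
To make the egress point concrete, here's a back-of-the-envelope sketch for estimating monthly transfer costs. The per-GB rates are invented for illustration; check your provider's current pricing and volume tiers.

# Back-of-the-envelope egress cost estimate. Rates are illustrative only;
# real prices vary by provider, region, and volume tier.
RATES_PER_GB = {
    "inbound": 0.00,             # inbound transfers are typically free
    "egress_to_internet": 0.09,  # hypothetical rate for data leaving the cloud
    "cross_region": 0.02,        # hypothetical rate for traffic between regions
}

def monthly_cost(gb_by_type):
    """Sum the estimated monthly transfer cost across traffic types."""
    return sum(RATES_PER_GB.get(kind, 0.0) * gb for kind, gb in gb_by_type.items())

if __name__ == "__main__":
    # Example: 500 GB in, 2 TB out to the internet, 300 GB replicated cross-region.
    estimate = monthly_cost({"inbound": 500, "egress_to_internet": 2048, "cross_region": 300})
    print(f"Estimated monthly transfer cost: ${estimate:.2f}")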

 

Compute

 

As a virtualization administrator, you're very familiar with managing the hypervisor, templates, and images. These images are the base operating environment in which your applications run. We've spent lots of time tweaking and tuning these images to make our applications run as efficiently as possible. Once our images are in production, we have to figure out how to scale for load and how to maintain a stable environment without affecting production. This ranges from rolling out patches to upgrading software.

 

As we move further into a hybrid IT model and begin to use the cloud providers’ tools, image management becomes a little easier. Most of the public cloud providers offer managed autoscaling groups, where resources spin up or down automatically based on a metric like CPU utilization, without you having to intervene. Some providers offer multiple upgrade rollout strategies for autoscaling groups, ranging from a simple canary rollout to upgrading the entire group at once. These tools help us scale with application demand automatically and simplify our software rollout strategy.
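
As one concrete example of what this looks like in practice, here's a short sketch using boto3 to attach a target-tracking scaling policy to an AWS Auto Scaling group. The group name is hypothetical, the 60% CPU target is arbitrary, and the snippet assumes the group and local AWS credentials already exist; other providers have equivalent mechanisms.

# Sketch: attach a target-tracking scaling policy to an existing AWS Auto Scaling
# group so instances scale on average CPU. Assumes a group named "web-asg" and
# local AWS credentials already exist; the 60% target is an arbitrary example.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",   # hypothetical group name
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        # Scale out and in to keep the group's average CPU near 60%.
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)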

 

Final Thoughts

 

I don't like the concept of having to throw away years of experience to learn this new model. Yes, the cloud abstracts a lot of the underlying hardware and virtualization, but traditional infrastructure skillsets and experience can still be applied. We will always need to monitor our applications to know how they work and interact with other services in the environment. We need to understand the differences in the cloud. Don't assume that something we did in the private data center will be a free service in the public cloud. The public cloud is a business, and while some of its services are free, most are not. Aside from network equipment and ISP costs, traditional infrastructure didn't make us account for the cost of moving data around inside the data center. I believe we can use our traditional infrastructure experience, apply new knowledge to understand the differences, and build new skills for the public cloud to create a successful hybrid IT environment.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article on some of the security concerns that come along with containers. I was surprised to hear that a third of our customers are planning to implement containers in the next 12 months. The rate of change in IT never seems to slow down.

 

Open-source containers, which isolate applications from the host system, appear to be gaining traction with IT professionals in the U.S. defense community. Security remains a notable concern for a couple of reasons.

 

First, containers are fairly new, and many administrators aren’t completely familiar with them. It’s difficult to secure something you don’t understand. Second, containers are designed in a way that hampers visibility. This lack of visibility can make securing containers taxing.

 

Layers Upon Layers

 

Containers are composed of a number of technical abstraction layers necessary for auto-scaling and developing distributed applications. They allow developers to scale application development up or down as necessary. Visibility becomes particularly problematic when using an orchestration tool like Docker Swarm or Kubernetes to manage connections between different containers, because it can be difficult to tell what is happening.

 

Containers can also house different types of applications, from microservices to service-oriented applications. Some of these may contain vulnerabilities, but that can be impossible to know without proper insight into what is actually going on within the container.

 

Protecting From the Outside In

 

Network monitoring solutions are ideal for network security tasks like identifying software vulnerabilities and detecting and mitigating phishing attacks, but they are insufficient for container monitoring. Containers require a form of software development life-cycle monitoring on steroids, and we aren’t quite there yet.

 

Security needs to start outside the container to prevent bad stuff from getting inside. There are a few ways to do this.

 

Scan for Vulnerabilities

 

The most important thing administrators can do to secure their containers is scan for vulnerabilities in their applications. Fortunately, this can be done with network and application monitoring tools. For example, server and application monitoring solutions can be used as security blankets to ensure applications developed within containers are free of defects prior to deployment.
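
As a rough sketch of how such a scan might be wired into a build step, the snippet below gates a deployment on a container image scan. The scanner command, its flags, and the image name are placeholders for whatever tool your agency has approved, not a specific product.

# Sketch: gate a deployment on a vulnerability scan. "image-scanner" and its
# flags are placeholders for whatever approved scanning tool your pipeline uses.
import subprocess
import sys

IMAGE = "registry.example.com/myapp:1.2.3"   # hypothetical image

def scan_image(image):
    """Return True if the scanner exits cleanly (no findings above its threshold)."""
    result = subprocess.run(["image-scanner", "--severity", "high", image])
    return result.returncode == 0

if __name__ == "__main__":
    if not scan_image(IMAGE):
        print("Vulnerabilities found; blocking deployment.")
        sys.exit(1)
    print("Scan clean; proceeding with deployment.")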

 

Properly Train Employees

 

Agencies can also ensure their employees are properly trained and that they have created and implemented appropriate security policies. Developers working with containers need to be acutely aware of their agencies’ security policies. They need to understand those policies and take necessary precautions to adhere to and enforce them.

 

Containers also require security and accreditation teams to examine security in new ways. Government IT security solutions are commonly viewed from a physical, network, or operating system level; the components of software applications are seldom considered, especially in government off-the-shelf products. Today, agencies should train these teams to be aware of approved or unapproved versions of components inside an application.

 

Get CIOs on Board

 

Education and enforcement must start at the top, and leadership must be involved to ensure their organizations’ policies and strategies are aligned. This will prove to be especially critical as containers become more mainstream and adoption continues to rise. It will be necessary to develop and implement new standards and policies for adoption.

 

Open-source containers come with just as many questions as they do benefits. Those benefits are real, but so are the security concerns. Agencies that can address those concerns today will be able to arm themselves with a development platform that will serve them well, now and in the future.

 

Find the full article on American Security Today.

 


Today’s public cloud hyperscalers, such as Microsoft Azure, AWS, and Google, provide a whole host of platforms and services to enable organizations to deliver pretty much any workload you can imagine. However, they aren’t the be-all and end-all of an organization’s IT infrastructure needs.

 

Not too long ago, the hype in the marketplace was very much geared toward moving all workloads to the public cloud; if you didn’t, you were behind the curve. The reality is, though, it’s just not practical to move all existing infrastructure to the cloud. Simply taking workloads running on-premises and running them unchanged in the public cloud, known as a “lift and shift,” is considered by many to be the wrong approach. That’s not to say it’s wrong for every workload, but things like file servers, domain controllers, and line-of-business application servers tend to cost more to run as native virtual machines in the public cloud and introduce extra complexity around application access and data gravity.

 

The “Cloud-First” mentality adopted by many organizations is disappearing and gradually being replaced with “Cloud-Appropriate.” I’ve found a lot of the “Cloud-First” messaging has been pushed from the board level without any real consideration or understanding of what it means to the organization beyond the promise of cost savings. Over time, the pioneers who adopted public cloud first have gained the knowledge and wisdom about what operating in a “Cloud-First” environment looks like. The operating costs don’t always work out as expected—and can even be higher.

 

Let’s look at some examples of what “Cloud-Appropriate” may mean to you. I’m sure you’ve heard of Office 365, which offers an alternative solution to on-premises workloads such as email servers and SharePoint servers, and offers additional value with tools like workplace collaboration via Microsoft Teams, task automation with Microsoft Flow, and so on. This Software as a Service (SaaS) solution, born in the public cloud, can take full advantage of the infrastructure that underpins it. As an organization, the cost of managing the traditional infrastructure for those services disappears. You’re left with a largely predictable bill and arguably superior service offering by just monitoring Office 365.

 

Application stack refactoring is another great place to think about “Cloud-Appropriate.” You can take advantage of the services available in the public cloud, such as highly performant database solutions like Amazon RDS or the ability to take advantage of public cloud’s elasticity to easily create more workloads in a short amount of time.

 

So where does that leave us? A hybrid approach to IT infrastructure. Public cloud is certainly a revolution, but for many organizations, the evolution of their existing IT infrastructure will better serve their needs. Hyperconverged infrastructure is a fitting example of the evolution of a traditional three-tier architecture comprising networking, compute, and storage. The services offered are the same, but the footprint in terms of space, cooling, and power consumption is lower while offering greater levels of performance, which ultimately offers better value to the business.

 

 

Further Reading

CRN and IDC: Why early public cloud adopters are leaving the public cloud amidst security and cost concerns. https://www.crn.com/businesses-moving-from-public-cloud-due-to-security-says-idc-survey

All too often, companies put the wrong people on projects, and the people making key decisions lack a basic understanding of the technology involved. I’ve been involved with hundreds of projects in which key decisions for the technology portions are made by well-meaning people who have no understanding of what they are trying to approve.

 

For example, back in 2001 or 2002, a business manager read that XML was the new thing to use to build applications. He decided his department's new knowledge base must be built as a single XML document so it could be searched with the XML language. Everyone in the room sat dumbfounded, and we then spent hours trying to talk him out of his crazy idea.

 

I’ve worked on numerous projects where the business decided to buy some piece of software, and the day the vendor showed up to do the install was the day we found out about the project. The hardware we were supposed to have racked and configured wasn't there; no one had told us about the aggressive uptime requirements the software was supposed to have; and the software licenses required to run the solution were never discussed with those of us in IT.

 

If the technology pros had been involved from the early stages of these projects, the inherent problems could have been surfaced much earlier and mitigated before the go-live date. Typically, when dealing with issues like these, everyone on the project is annoyed, and that’s no way to make forward progress. The business is mad at IT because IT didn’t deliver what the vendor needed. IT is mad at the business because they found out too late to ensure a smooth installation and service turn-up. The company is mad at the vendor because the vendor didn’t meet the deadline. The vendor is mad at the client for not having the servers the business was told they needed.

 

If the business unit had simply brought the IT team into the project earlier—hopefully much earlier—a lot of these problems wouldn’t have happened. Having the right team to solve problems and move projects through the pipeline makes everything easier. That’s the entire point: to complete the project successfully. The bonus is that people aren’t angry at each other for entirely preventable reasons.

 

Having the right people on projects from the beginning can make all the difference in the world. If people aren’t needed on a project, let them bow out; but let them decide that they don’t need to be involved. Don’t decide for them. By choosing for them, you can introduce risk to the project and end up creating more work for people. After the initial project meetings, send a general notice to the people on the project letting them know they can opt out of the rest of the project if they aren’t needed; if they’re necessary, let them stay.

 

I know in my career I’ve sat in a lot of meetings for projects, and I’d happily sit in hundreds more to avoid finding out about projects at the last minute. We can make the projects that we work on successful, but only if we work together.

 

Top 3 Considerations to Make Your Project a Success

  • Get the right people on the project as early as possible
  • Let people and departments decide if they are needed on a project; don't decide for them
  • Don't shame people who leave a project because they aren't needed on the project team

Had a great time at the Seattle SWUG last week. I always enjoy the happiness I find at SWUG events. Great conversations with customers and partners, and wonderful feedback collected to help us make better products and services. Thanks to everyone that was able to participate.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

90% of data breaches in US occur in New York and California

There is no such thing as a “leaky database.” Databases don’t leak; it’s the code you layer on top that does.

 

Top 5 Configuration Mistakes That Create Field Days for Hackers

And sometimes the code is solid, but silly default configurations are your main security risk.

 

A ransomware attack is holding Baltimore's networks hostage

Maybe Baltimore thought they were safe because they weren’t in California or New York.

 

Facebook sues data collection and analytics firm Rankwave

“Hey, you can’t steal our users’ data and sell it, only *WE* get to do that!”

 

Microsoft recommends using a separate device for administrative tasks

I’ve lost track of the number of debates I’ve had with other admins who insist on installing tools onto servers “just in case they are needed.”

 

Hackers Still Outpace Breach Detection, Containment Efforts

It takes an intruder minutes to compromise an asset, and it takes months before you will discover it happened.

 

Watch Microsoft’s failed HoloLens 2 Apollo moon landing demo

This is a wonderful demo, even if it failed at the time they tried it live.

 

Breakfast at the SWUG, just in case you needed an incentive to attend:

Starting with DevOps can be hard. Often, it's not entirely clear why you're getting on the DevOps train. Sometimes, it's simply because it's the new trendy thing to do. For some, it's to minimize the friction between the traditional IT department (“Ops”) and developers building custom applications (“Dev”). Hopefully, it will solve some practical issues you and your team may have.

 

In any case, it's worth looking at what DevOps brings to the table for your situation. In this post, I'll help you set the context of the different flavors and aspects of DevOps. “CALMS” is a good framework to use to look at DevOps.

 

CALMS

CALMS neatly summarizes the different aspects of DevOps. It stands for:

  • Culture
  • Automation
  • Lean
  • Measurement
  • Sharing

 

Note how technology and technical tooling are only one part of this mix. This might surprise you, as many focus on just the technological aspects of DevOps. In reality, there are many more aspects to consider.

 

And taking this one step further: getting started with DevOps is about creating and fostering high-performance teams that imagine, develop, deploy, and operate IT systems. This is why Culture, Automation, Lean, Measurement, and Sharing are equal parts of the story.

 

Culture

Arguably, the most important part of creating highly effective teams is shared responsibility. Many organizations choose to create multi-disciplinary teams that include specialists from Ops, Dev, and Business. Each team can take full responsibility for the full lifecycle of part of (or an entire) IT system, a technical domain, or a part of the customer journey. The team members collaborate, experiment, and continuously improve their system. They'll take part in blameless post-mortems and sprint reviews, providing feedback and improving processes and collaboration.

 

Automation

This is the most concrete part of DevOps: tooling and automation. It's not just about automation, though. It's about knowing the flow of information through the process from development to production, also called a value stream, and automating it.

 

For infrastructure and Ops, this is also called Infrastructure-as-Code: a methodology of applying software development practices to infrastructure engineering and operational work. The key to infra-as-code is treating your infrastructure as a software project. This means maintaining and managing the state of your infrastructure in version-controlled, declarative code and definitions. This code goes through a pipeline of testing and validation before the state is mirrored on production systems.
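
As a toy illustration of the desired-state idea, independent of any particular IaC tool, the sketch below compares version-controlled definitions against what's actually running and reports the drift. The resource names and fields are invented for the example.

# Toy desired-state reconciliation: compare version-controlled definitions to the
# live environment and report the drift. Resource names and fields are invented.
DESIRED = {
    "web-01": {"size": "medium", "ports": [80, 443]},
    "db-01":  {"size": "large",  "ports": [5432]},
}

def diff(actual):
    """Return human-readable differences between desired and actual state."""
    changes = []
    for name, spec in DESIRED.items():
        if name not in actual:
            changes.append(f"create {name} with {spec}")
        elif actual[name] != spec:
            changes.append(f"update {name}: {actual[name]} -> {spec}")
    for name in actual:
        if name not in DESIRED:
            changes.append(f"remove {name} (not in source control)")
    return changes

if __name__ == "__main__":
    live = {"web-01": {"size": "small", "ports": [80]}}  # pretend inventory from an API
    for change in diff(live):
        print(change)

In a real pipeline, the same definitions would be linted and tested before an apply step mirrors them onto production.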

 

A good way to visualize this is the following flow chart, which can be equally applied to infrastructure engineering and software engineering.

 

The key goal of visualizing these flows is to identify waste, which in IT is manual and reactive labor. Examples are fixing bugs, mitigating production issues, supporting customer incidents, and solving technical debt. This is all a form of re-work that, in an ideal world, could be avoided. This type of work takes engineering time away from the good kind of manual work: creating new MVPs for features, automation, tests, infrastructure configuration, etc.

 

Identifying manual labor that can be simplified and automated creates an opportunity to choose the right tools to remove waste, which we'll dive into in this blog series. In upcoming posts, you'll learn how to choose the right set of DevOps monitoring tools, which isn’t an easy task by any stretch of the imagination.

 

Lean

Lean is a methodology first developed by Toyota to optimize its factories. These days, Lean can be applied to manufacturing, software development, construction, and many other disciplines. In IT, Lean is valuable for visualizing and mapping out the value stream: a single flow of work within your organization that benefits a customer. An example would be the journey of a piece of code from ideation to the hands of the customer by way of a production release. It's imperative to identify and visualize your value stream, with all its quirks, unnecessary approval gates, and process steps. With this, you'll be able to remove waste and toil from the process and create flow. These are all important aspects of creating high-performing teams. If your processes contain a lot of waste, complexity, or variation, chances are the team won't be as successful.

 

Measurement

How do you measure performance and success? The DevOps mindset leans heavily on measuring performance and progress. While it doesn't prescribe specific metrics to use, there are a couple of common KPIs many teams go by. For IT teams, there are four crucial metrics to measure the team's performance, inside-out (a small calculation sketch follows the list):

  1. Deployment Frequency: how often does the team deploy code?
  2. Lead time for changes: how long does it take to go from code commit to code successfully running in production?
  3. Time to restore service: how long does it take to restore service after an incident (like an unplanned outage or security incident)?
  4. Change failure rate: what percentage of changes results in an outage?
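
As a minimal sketch of how these could be computed, the snippet below derives all four from simple deployment and incident records; the record format and numbers are invented for illustration.

# Sketch: derive the four metrics from simple deployment/incident records.
# The record format and values are invented for illustration.
from datetime import datetime, timedelta

deployments = [
    # (commit_time, deploy_time, caused_outage)
    (datetime(2019, 5, 1, 9), datetime(2019, 5, 1, 15), False),
    (datetime(2019, 5, 3, 10), datetime(2019, 5, 4, 11), True),
    (datetime(2019, 5, 6, 8), datetime(2019, 5, 6, 9), False),
]
restore_times = [timedelta(hours=2)]  # time to recover from each outage
days_observed = 7

deploy_frequency = len(deployments) / days_observed
lead_time = sum(((deploy - commit) for commit, deploy, _ in deployments), timedelta()) / len(deployments)
change_failure_rate = sum(1 for *_, failed in deployments if failed) / len(deployments)
time_to_restore = sum(restore_times, timedelta()) / len(restore_times)

print(f"Deployments per day: {deploy_frequency:.2f}")
print(f"Average lead time for changes: {lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Mean time to restore: {time_to_restore}")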

 

In addition, there are some telling metrics to measure success from the outside-in:

  1. Customer satisfaction rate (NPS)
  2. Employee satisfaction (happiness index)
  3. Employee productivity
  4. Profitability of the service

 

Continuously improving the way teams work and collaborate and minimizing waste, variation, and complexity will result in measurable improvements in these key metrics.

 

Sharing

To create high-performing teams, team members need to understand each other while still contributing their expertise. This creates the tension between “knowing a lot about few things” and “knowing a little about a lot of things.” This is known as the T-shaped knowledge problem. To balance between the two, high-performing teams are known to spend a decent amount of time on sharing knowledge and exchanging experiences. This can take shape in many ways, like peer review, pair programming, knowledge sessions, communities of expertise, meetups, training, and internal champions that further their field with coworkers.

 

Next Up

With this contextual overview, we've learned DevOps is much more than just technology and tooling; grasping these concepts is vital for creating high-performance teams. But choosing the right approach for automation is no joke, either. There's so much to choose from, ranging from small tools that excel at one task but are harder to integrate into your toolchain to “OK” enterprise solutions that do many tasks but come pre-integrated. In the next post in this getting started with DevOps series, we'll look at why choosing a one-size-fits-all solution won't work. 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s another interesting article from my colleague Sascha Giese on how improved communications and training can help organizations keep their infrastructure updated. Training is one of those things that’s always a priority but rarely makes it to the top of the list.

 

Government technology professionals dedicate much of their time to optimizing their IT infrastructures. So, when new policies or cultural issues arise, it can be challenging to integrate these efficiently within the existing landscape.

 

The SolarWinds IT Trends Report 2018 revealed that, in the U.K. public sector, this challenge is yet to be resolved—43% of those surveyed cited inadequate organizational strategies as the reason for the lack of optimization, followed closely by 42% who selected insufficient training investment. Let’s explore these topics further.

 

Communication Should Never Be a One-off

 

Organizational IT strategy may start at the top, but often it can get lost in translation or diluted as it’s passed down through the ranks—if it gets passed down at all. As such, IT managers might be doing their daily jobs, but they may not be working with an eye towards their agencies’ primary objectives.

 

One example of this is the use of public cloud, which—despite the Cloud First policy being introduced in 2013—is still not being realized across the U.K. government to its full potential, with less than two-thirds (61%) of central government departments having adopted any public cloud so far.

 

Agency IT leaders should consider implementing systematic methods for communicating and disseminating information, ensuring that everyone understands the challenges and opportunities and can work toward strategic goals. Messages could also then be reinforced on an ongoing basis. The key is to make sure that the U.K. government IT strategy remains top-of-mind for everyone involved and is clearly and constantly articulated from the top down.

 

Training Should Be a Team Priority

 

The IT Professionals Day 2018 survey by SolarWinds found that, globally, 44% of public sector respondents would develop their IT skillset if they had an extra hour in their workday. But formal training isn’t free: travel to seminars and class tuition fees cost money that agencies may not have.

 

Training can have a remarkably positive impact on efficiency. In addition to easing the introduction of new technologies, well-trained employees know how to better respond in the case of a crisis, such as a network outage or security breach. Their expertise can save precious time and be an effective safeguard against intruders and disruption, which can be invaluable in delivering better services to the public.

 

Self-training can be just as important as agency-driven programs. It may be beneficial in the long run for technology professionals to hold themselves accountable for learning about agency objectives and how tools can help them meet those goals, supported with an allocated portion of time that professionals can use for this purpose. People don’t necessarily learn through osmosis, but through action, and at different levels.

 

For this and other education initiatives, technology professionals should use the educational allowances allocated to them by their organizations, which can sometimes run into thousands of dollars. They should take the time to learn about the technologies they already have in-house, but also examine other solutions and tools that will help their departments become fully optimized, especially when these may form part of a broader public sector IT strategy.

 

Though surveys like the IT Trends Report have highlighted a knowledge and information-sharing gap, stronger communication and training initiatives in government organizations could help close it. Better-optimized environments for IT teams raise the quality of the service their departments can deliver to the wider public, bringing about better changes for all.

 

Find the full article on Open Access Government.

 


Hello THWACKers, long time no chat! Welcome to part one in a five-part series on machine learning and artificial intelligence. I figured what better place to start than in the highly contested world of ethics? You can stop reading now, because we’re talking about ethics, and that’s the last thing anyone ever wants to talk about. But before you go, know this isn’t your standard Governance, Risk, and Compliance (GRC) talk, where everything is driven and modeled by a policy that can be easily policed, defined, dictated, and followed. Why isn’t it? Because if that were true, we wouldn’t need any discussion on the topic of ethics; it would merely be a discussion of policy—and who doesn’t love policy?

 

Let me start by asking you an often overlooked but important question. Does data have ethics? On its own, the simple answer is no. As an example, we have Credit Reporting Agencies (CRAs) who collect our information, like names, birthdays, payment history, and other obscure pieces of information. Independently, that information is data, which doesn’t hold, construe, or leverage ethics in any way. If I had a database loaded with all this information, it would be a largely boring dataset, at least on the surface.

 

Now take the information the CRAs have, and say I go to get a loan to buy a house, get car insurance, or rent an apartment. If I pass the credit check and get the loan, the data is great. Everybody wins. But if I’m ranked low in their scoring system and I don’t get to rent an apartment, for example, the data is bad and unethical. OK, on the surface, the information may not be unethical per se, but it can be used unethically. Sometimes (read: often) a person's credit, name, age, gender, or ethnicity will be factored into models to label them as “more creditworthy” or “less creditworthy” for loans, mortgages, rent, and so on.

 

That doesn’t mean the data or the information in the table or model is ethical or unethical, but certainly claims can be made that biases (often human biases) have influenced how that information has been used.

 

This is a deep subject—how can we make sure our information can’t be used inappropriately or for evil? You’re in luck. I have a simple answer to that question: You can’t. I tried this once. I used to sell Ginsu knives and I never had to worry about them being used for evil because I put a handy disclaimer on it. Problem solved.

 

Disclaimer

 

Seems like a straightforward plan, right? That’s what happens when policy, governance, and other aspects of GRC enter into the relationship of “data.” “We can label things so people can’t use them for harm.” Well, we can label them all we want, but unless we enact censorship, we can’t STOP people from using them unethically.

 

So, what do we do about it? The hard, fast, and easy solution for anyone new to machine learning or wanting to work with artificial intelligence is: use your powers for good and not evil. I use my powers for good, but I know that a rock can be used to break a window or hurt someone (evil), and it can also be used to build roads and buildings (good). We’re not going to ban all rocks because they could possibly be used wrongly, just as we’re not going to ban everyone’s names, birthdays, and payment history because they could be misused.

 

We have to make a concerted effort to realize the impact of our actions and find ways to better the world around us through them. There’s still so much more to discuss on this topic, but approaching it with an open mind, and realizing how much good we can do in the world, will leave you feeling a lot happier than dwelling on the darkness and worry surrounding things you cannot control.

 

Was this too deep? Probably too deep a subject for the first post in this series, but it was timely, given the Lightning Talk I was forced (yes, I said forced) to give on machine learning and ethics at the recent ML4ALL Machine Learning Conference.

 

ML4ALL Lightning Talk on Ethics

 

https://youtu.be/WPZd2dz5nfc?t=17238

 

Feel free to enjoy the talk here, and if you found this useful, terrifying, or awkward, let’s talk about it. I find ethics a difficult topic to discuss, mainly because people want to enforce policy on things they cannot control, especially when the bulk of the information is “public.” But the depth of classifying and changing the classification of data is best saved for another day.

In Seattle this week for the Seattle SWUG. If you're in the room reading this, then you aren’t paying attention to the presentation. So maybe during the break you should find me, say hello, and we can talk data or bacon.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

The productivity pit: how Slack is ruining work

Making me feel better about my decision to quit Slack last year.

 

Dead Facebook users could outnumber living ones within 50 years

Setting aside the idiocy of thinking Facebook will still be around in 50 years, the issue with removing deceased users from platforms such as Facebook or LinkedIn is real and not easily solved.

 

Hackers went undetected in Citrix’s internal network for six months

For anyone believing they are on top of securing data, hackers went undetected in Citrix’s internal network for six months. Six. Months.

 

Dutch central bank tested blockchain for 3 years. The results? ‘Not that positive’

One of the more realistic articles about blockchain: an organization that admits it’s trying, isn’t having smashing success, and is willing to keep researching. A refreshing piece compared to the marketing fluff about blockchain curing polio.

 

Docker Hub Breach: It's Not the Numbers; It's the Reach

Thanks to advances in automation, data breaches in a place like Docker can end up resulting in breaches elsewhere. Maybe it’s time we rethink authentication. Or perhaps we rethink who we trust with our code.

 

Los Angeles 2028 Olympics budget hits $6.9B

Imagine if our society was able to privately fund $6.9B towards something like poverty, homelessness, or education instead of arranging extravagant events that cost $1,700 a ticket to attend in person.

 

A Not So Fond Look Back At Action Park, America's Scariest Amusement Park

Is it weird that watching this video makes me want to go to this park even more?

 

I like it when restaurants post their menu outside:

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article from my colleague Sascha Giese on strategies for digital transformation in the public sector. Our government customers here in the states have similar challenges and should benefit from the discussion.

 

For an organization like the NHS, digital transformation can present challenges, but the need for faster service delivery, cost management efficiency, and improvements to patient care make the adoption of technology a strategic priority.

 

Digital transformation refers to a business restructuring its systems and infrastructure to avoid a potential tipping point caused by older technologies and downward market influences. This transformation can also be disruptive, as it affects nearly every aspect of the organization.

 

For an organization like the U.K. NHS, this can present more challenges than for private-sector businesses.

 

Outdated infrastructure often struggles to keep up with the amount and type of data being produced, and with the volume of data the NHS processes now being supplemented by data coming in from private healthcare providers as well, the technology deployed could fall further behind. There are also growing concerns regarding management and security of this data.

 

Because of this, the NHS is in the perfect position to benefit from implementing a digital transformation strategy. No matter how small, starting now could help keep doctors away from paperwork and closer to their patients, which, at the end of the day, is what really matters.

 

For the NHS to reap the benefits of digital transformation, it’s important for IT decision makers to consider emerging technologies, such as cloud, artificial intelligence (AI), and predictive analytics.

 

Without the knowledge of how and why digital transformation can benefit the NHS, it is understandable that a recent survey from SolarWinds, conducted by iGov, found that nearly one in five NHS trusts surveyed have no digital transformation strategy, and a further 24% have only just started one.

 

Being aware is the first hurdle to overcome, and the NHS is already on its way to conquering it.

 

Getting to grips with new technology is always going to be a challenge, and even more so for those handling some of the U.K.’s most-critical data—that of our health and wellbeing—so acknowledging that legacy technology is holding the NHS back means they’re best placed to start implementing these changes.

 

Next, IT leaders should consider implementing a transformation strategy that supports these goals. Enlisting the right people from within the organization with expertise that can guide the process and implementing the best tools can help enable visibility and management throughout the whole process. Some methods to think about executing first include:

 

  • Simplifying current IT: Complexity often leads to mistakes, longer processes, and increased costs across the board.
  • Keeping IT flexible: Hybrid environments are the norm for many agencies. NHS trusts should consider technology that enables the use of private, public, or hybrid cloud, where data, workloads, and applications can be moved from one platform to another with a simple click.
  • Maintaining IT resilience: Trusts that need to run 24/7 should use systems that ensure both data availability and data protection.
  • Creating a transformational culture: Changing the culture starts at the top; if trust leaders are unwilling to consider change, it’s likely that their subordinates are also resistant.

 

With the right preparation and tools in place, the journey to digital transformation can be a positive experience for improving NHS IT solutions and can yield impressive results.

 

The healthcare industry can benefit greatly from implementing transformation strategies, so the sooner these can be integrated, the quicker we can see improvements across the board.

 

Find the full article on Building Better Healthcare.

 


Happy May Day! We are one-third of the way through the calendar year. Now is a good time to check on your goals for 2019 and adjust your plans as needed.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Departing a US Airport? Your Face Will Be Scanned

My last two trips out of DTW have used this technology. I had initial privacy concerns, but the tech is deployed by Border Patrol, and your data is not shared with the airline. In other words, the onus of passport control at the gate is being removed from the airlines and put into the hands of the people that should be doing the checking.

 

Password "123456" Used by 23.2 Million Users Worldwide

This is why we can’t have nice things.

 

Hacker Finds He Can Remotely Kill Car Engines After Breaking Into GPS Tracking Apps

“…he realized that all customers are given a default password of 123456 when they sign up.”

 

Some internet outages predicted for the coming month as '768k Day' approaches

The outage in 2014 was our wake-up call. If your hardware is old, and you haven’t made the necessary configuration changes, then you deserve what's coming your way.

 

Password1, Password2, Password3 no more: Microsoft drops password expiration rec

Finally, some good news about passwords and security. Microsoft will no longer force users to reset passwords after a certain amount of time.

 

Ethereum bandit makes off with $6.1M after bypassing weak private keys

Weak passwords are deployed by #blockchain developers, further eroding my confidence in this technology and the people building these systems.

 

Many Used Hard Drives Sold on eBay Still Contain Leftover Data

Good reminder to destroy your old hard drives and equipment.

 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article on critical cyber roles within the federal government that require improved IT skills and stronger leadership.

 

The public sector faces an incredible number of cybersecurity threats, and given that the government houses some of our most sensitive data, the number of attacks will continue to grow. What’s worse than the number of attacks is that, statistically, about one in three targeted attacks results in a security breach. As these breaches continue to grow more dangerous, it’s critical to identify and recruit the right personnel to ensure a stronger security posture.

 

Meeting that need head on, the Office of Personnel Management (OPM) has put out a call for data to identify the most vital cybersecurity needs across the federal agencies. Beginning in 2019, federal agencies will be required to submit reports annually through 2022. The OPM is asking agencies to:

 

  1. Identify critically needed cybersecurity roles
  2. Determine the root causes of cyber workforce shortages
  3. Develop an action plan to combat those root causes

 

While this is a great step toward stronger agency security, what can federal IT pros do today to help combat increasing threats?

 

Enhancing the Technical Team

According to a recent report by Accenture, government executives are less confident that they are successfully monitoring, identifying, and measuring breaches. In fact, most feel their current federal cybersecurity monitoring efforts are insufficient, and more than half specifically mention cyberthreat analytics as a key cybersecurity gap.

 

Assume an agency already has a solid network and application monitoring platform—one that provides a unified view of all the information throughout the infrastructure. This is the most critical first step.

 

The platform by itself isn’t enough. Understand your inventory (software, hardware, tools, people); understand the data that’s being stored and passing through these systems; and shore up the team tasked with monitoring, analyzing, and acting on the data being provided. Adding more highly-skilled staff or upskilling your current team should be your first priority.

 

It can be imperative to have a security management platform that can detect anomalies or abnormalities as well as the personnel to analyze and understand the implications of the information being provided.

 

Agency Leadership

The second half of the equation for a stronger cybersecurity posture is strong cybersecurity leadership.

 

The adage that the tone of any organization comes from the top is absolutely true in the world of cybersecurity. Sound leadership should espouse good cyberhygiene and help to create a culture of cybersecurity awareness and diligence. Cybersecurity leaders should emphasize accountability and build and support a strategy to make that possible.

 

Conclusion

There are two distinct approaches when considering where your agency can enhance its hiring and personnel support: the highly technical end, where experts can identify and act on anomalies before they become threats, and the managerial end, where leaders can encourage and enable a culture of awareness, diligence, and accountability.

 

Find the full article on Government Technology Insider.

 


I hope everyone had a Happy Easter this past weekend. We celebrated in the usual way, with the burning of the Christmas tree and eating our weight in ham.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Mueller report forced Congress to find PCs with disc drives

The Mueller report is a nice reminder for those of us who have tried restoring a system but can’t find an OS old enough for the app to run.

 

Delta is reducing how much seats recline to protect your personal space

My biggest complaint about airlines is the “business” seat doesn’t allow you to use your laptop when the person in front of you decides to recline. Here’s hoping Delta sets a new standard for everyone to follow.

 

5-star phonies: Inside the fake Amazon review complex

Fake Amazon reviews are about as surprising as inaccurate Wikipedia articles. But I like this article for their attempt at using data to show the extent of Amazon’s review problem.

 

America’s Great College Boom Is Winding Down

A handful of local schools have closed their doors; one was mid-semester. I suspect the closings are linked to the student loan bubble.

 

A ransomware attack took The Weather Channel off the air

They should film an episode of “It Could Happen Tomorrow” dedicated to malware.

 

Encryption: A cheat sheet

I found this guide to be a helpful overview and thought you may find it useful to share as well. I think it was this sentence: “Encryption won't stop your data from being stolen.” Truth.

 

How to Detect Hidden Surveillance Cameras With Your Phone

As someone that travels frequently, I’ve hit the point where I feel it necessary to scan for hidden cameras in my room.

 

Another Easter tradition for our family meal and yes, those are bacon crumbles:

 
