
Geek Speak


Have you ever read about TV ratings? Almost everyone who watches TV has heard of the ratings produced by Nielsen Media Research. These statistics shape how we watch TV and determine whether or not shows are renewed for more episodes in the future.

But, how does Nielsen handle longer programs? How do they track the Super Bowl? Can they really tell how many people were tuned in for the entire event? Or who stopped watching at halftime after the commercials were finished? This particular type of tracking could let advertisers know when they want their commercials to air. And for the network broadcasting the event, it could help them figure out how much to charge during the busiest viewing times.

You might be interested to know that Nielsen tracks their programs in 15-minute increments. They can tell who was tuned in for a particular quarter-hour segment over the course of multiple hours. Nielsen has learned that monitoring the components of a TV show helps them understand the analytics behind the entire program. Understanding microtrends helps them give their customers the most complete picture possible.

Now, let's extend this type of analysis to the applications that we use. In the old days, it was easy to figure out what we needed to monitor. There were one or two servers that ran each application. If we kept an eye on those devices, we could reliably predict the performance of the software and the happiness of the users. Life was simple.

Enter virtualization. Once we started virtualizing the servers that we used to rely on for applications, we gained the ability to move those applications around. Instead of an inoperable server causing our application to be offline, we could move that application to a different system and keep it running. As virtual machines matured, we could increase performance and reliability. We could also make applications run across data centers to provide increased capabilities across geographic locations.

This all leads to the cloud. Now, virtual machines could be moved hither and yon and didn't need to be located on-prem. Instead, we just needed to create new virtual machines to stand up an application. But, even if the hardware was no longer located in our data center, we still needed to monitor what we were doing. If we couldn't monitor the hardware components, we still needed to monitor the virtual machines.

This is where our Nielsen example comes back into play. Nielsen knows how important it is to monitor the components of a program. So too must we keep an eye on the underlying components of our infrastructure. With virtual machines becoming the key components of our applications today, we must have an idea of how they are being maintained to understand how our applications are performing.

What if the component virtual machines are sitting on opposite sides of a relatively slow link? What if the database tier is in Oregon while the front-end for the application is in Virginia? Would it cause an issue if the replication between virtual machines on the back-end failed for some reason due to misconfiguration and we didn't catch it until they got out of sync? There are a multitude of things we can think about that might keep us up at night figuring out how to monitor virtual machines.

Now, amplify that mess even further with containers. The new vogue is to spin up containers with Docker or Kubernetes to provide short-lived services. If you think monitoring component virtual machines is hard today, just wait until those constructs have a short life and are destroyed as fast as they are created. Now, problems can disappear before they're even found. And then they get repeated over and over again.

The key is to monitor both the application and the infrastructure constructs. But it also requires a shift in thinking. You can't just rely on SNMP to save the day yet again. You have to do the research to figure out how best to monitor not only the application software but the way it is contained in your cloud provider or data center. If you don't know what to look for, you might miss the pieces that could be critical to figuring out what's broken or, worse yet, what's causing performance issues without actually causing things to break.
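
As a concrete (and hedged) illustration, here is a minimal Python sketch that pulls live metrics straight from the container runtime using the Docker SDK for Python (the docker package), assuming the package is installed and the daemon socket is reachable. It is not a substitute for a full monitoring platform; it simply shows the kind of per-container telemetry SNMP alone won't give you.

```python
# Minimal sketch: snapshot CPU and memory stats for every running container.
# Assumes the "docker" Python package is installed and the local daemon is reachable.
import docker

client = docker.from_env()

for container in client.containers.list():
    stats = container.stats(stream=False)  # one-shot stats snapshot, not a stream
    cpu_total = stats["cpu_stats"]["cpu_usage"]["total_usage"]
    mem_bytes = stats["memory_stats"].get("usage", 0)
    print(f"{container.name}: cpu_total={cpu_total} mem_bytes={mem_bytes}")
```

In practice you would ship these numbers to your monitoring platform on a tight interval, because short-lived containers can appear and vanish between polls.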

What do The Guru, The Expert, The Maven, The Trailblazer, The Leading Light, The Practice Leader, The Heavyweight, The Opinion Shaper, and The Influencer all have in common? These are all other examples of what are commonly referred to as “Thought Leaders.” Some may say “Thought Leader” is just the latest buzzword for experts and influencers, but buzzword or not, Thought Leaders were around long before the term came into use. Thought Leaders are the go-to experts among industry colleagues and peers. They are the influencers who set direction within an organization, and sometimes they can be that leading light in your department who innovates new ideas and visions. Thought Leaders are often not in the direct line of the management chain, but instead complement management and lead through example to execute vision and goals.

 

Not All Thought Leaders are the Same

The saying “One size does NOT fit all” also applies to Thought Leadership, because not all Thought Leaders are the same. Some Thought Leaders focus on cutting-edge trends, while others are there to inspire. However, most Thought Leaders are experts in a field or industry and sometimes have a stance on a particular topic. They look beyond the business agenda and see the overall picture, because every industry is constantly evolving. Having insight into the trends and applying them to achieve and deliver results is part of the equation. You must be able to lead others and want to develop them as people, not just players on a team.

When someone asks me how they can become a Thought Leader, I tell them this isn’t about you, it’s about others. When you help others by sharing your knowledge and experiences, all that other stuff will naturally come. Thought leadership status isn’t obtained through a single article or social media post on Twitter or LinkedIn. It’s something you build through your experiences, creating credibility among your followers or your team at work. Experience takes time. Experience also means not only learning but listening to others. Everyone has different ideas and opinions, and being humble enough to listen to and understand others is a critical part of the learning process. Thought Leaders don’t have all the answers, and they are constantly learning themselves.

Credibility does not always mean obtaining all the latest industry certifications. While they can help, they aren’t everything, because real-life experience is just as important. Someone who has every certification in the industry but no applied real-world experience will probably not earn the same credibility as someone with 15+ years’ experience and fewer certifications.

Being the “go-to” person means defining trends or topics and showing your followers how they can take that knowledge further. It doesn’t stop once you’re there, either; you will need to stay involved and keep learning, or your followers will eventually stop coming to you for guidance and that “vision.”

It’s About Others

I still get shocked sometimes when people refer to me as a Thought Leader, because I didn’t set out to become one. What I wanted to do, and still want to do, is make a difference in the world, in the company I work for, and for my coworkers and peers. I wanted to help others be successful by sharing any knowledge or skills that I may have. My hope was that by sharing my experiences, others could be empowered to better themselves. Early in my IT career, a manager gave me the best advice: sharing your knowledge will make you more valuable, and it will motivate you to learn more. I have kept that advice ever since and use it daily.

Back from VMworld and it's hotter here at home than in Las Vegas. I've no idea how that is possible. VMworld was a wonderful show, and it's always good to see my #vFamily.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

AWS announces Amazon RDS on VMware

There were lots of announcements last week at VMworld, but I found this one to be the most interesting. AWS and VMware are bringing the cloud to your data center. I expect Microsoft to follow suit. It would appear that all three companies are working together to control and maintain infrastructure for the entire world.

 

Earthquake early-warning system successfully sent alarm before temblor felt in Pasadena

I applaud the effort here, and hope that these systems will allow for more advanced warnings in later versions. Because alerting me 3 seconds before an earthquake strikes is not enough of a warning.

 

Two seconds to take a bite out of mobile bank fraud with Artificial Intelligence

OTOH, alerting within two seconds seems reasonable for detecting fraud, because fraud usually doesn’t involve a building falling on top of me. And this is a great example of how AI research can make the world a better place.

 

Video games that allow in-game purchases to carry Pegi warning

I think this is a great first step, but more work is needed. I’d like to see video games publish the amount of time and/or money necessary to complete the game.

 

World's Oldest Customer Complaint Goes Viral

After reading this, I do not recommend shopping at Ea-nasir’s store. Avoid.

 

Major Quantum Computing Advance Made Obsolete by Teenager

Ignore the clickbait title. The kid didn’t make anything obsolete. But he did stumble across a new model for recommendations. The likely result is that when quantum computing finally lands, researchers will be able to focus on solving real issues, like global warming, and not worry about what movies a person wants to rent next.

 

How to Roll a Strong Password with 20-Sided Dice and Fandom-Inspired Wordlists

The next time you need to rotate a password, start here.

 

Adding this to my conference survival guide:

 

Thomas LaRock and Karen Lopez fighting an octomonster to protect servers.

In my previous posts about Building a Culture of Data Protection (Overview, Development, Features, Expectations) I covered the background of building a culture.  In this post, I'll be going over the Tools, People, and Roles I recommend to successful organizations.

 

Tools

 

Given the volume and complexity of modern data systems, teams have to use requirements, design, development, test, and deployment tools to adequately protect data.  Gone are the days of "I'll write a quick script to do this; it's faster." Sure, scripts are important for automation and establishing repeatable processes.  But if you find yourself opening up your favorite text editor to design a database, you are starting in the wrong place. I recommend these as the minimum tool stack for data protection:

 

  • Data modeling tools for data requirements, design, development, and deployment
  • Security and vulnerability checking tools
  • Database comparison tools (may come with your data modeling tool)
  • Data comparison tools for monitoring changes to reference data as well as pre-test and post-test data states
  • Data movement and auditing tools for tracking what people are doing with data
  • Log protection tools to help ensure no one is messing with audit logs
  • Permissions auditing, including changes to existing permissions
  • Anonymous reporting tools for projects and individuals not following data protection policies

 

These tools could be locally hosted or provided as services you run against your environments.  There are many more tools and services a shop should have deployed; I've just covered the ones that I expect to see everywhere.
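
To make the "data comparison tools" item above concrete, here is a rough Python sketch of the kind of pre-test/post-test reference-data check such a tool automates. The file names and the "code" key column are hypothetical placeholders for whatever your reference data actually looks like.

```python
# Hypothetical sketch: diff a pre-test and post-test snapshot of reference data
# (exported as CSV) and report which rows were added, removed, or changed.
import csv

def load_snapshot(path, key_field):
    with open(path, newline="") as f:
        return {row[key_field]: row for row in csv.DictReader(f)}

before = load_snapshot("ref_data_before.csv", key_field="code")
after = load_snapshot("ref_data_after.csv", key_field="code")

added = set(after) - set(before)
removed = set(before) - set(after)
changed = {k for k in set(before) & set(after) if before[k] != after[k]}

print(f"added={sorted(added)} removed={sorted(removed)} changed={sorted(changed)}")
```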

 

People

The people on your team should be trained in best practices for data security and privacy.  This should be regular training, since compliance and legal issues change rapidly. People should be tested on these as well, and I don't mean just during training.

 

When I worked in high-security locations, we were often tested for compliance with physical security while we went about our regular jobs. They'd send people not wearing badges to our offices asking project questions.  They would leave pages marked SECRET on the copier and printers.  They would fiddle with our desktops to see if we noticed extra equipment.  I recommend to management that they do this with virtual things like data as well.

 

As I covered in my first post, I believe people should be measured and rewarded based on their data protection actions. If there are no incentives for doing the hard stuff, it will always get pushed to "later" on task lists.

 

Roles

I'm a bit biased, but I recommend that every project have a data architect, or a portion of one.  This would be a person who is responsible for managing and reviewing data models, is an expert in the data domains being used, is rewarded for ensuring data protection requirements are validated and implemented, and is given a strong governance role for dealing with non-compliance issues.

 

Teams should also have a development DBA to choose the right data protection features, ensuring data security and privacy requirements are implemented in the best way given the costs, benefits, and risks associated with each option.

 

Developers should have a designated data protection contact. This could be the project lead or any developer with a security-driven mindset. This person would work with the data architect and DBA to ensure data protection is given the proper level of attention throughout the process.

 

Quality assurance teams should also have a data protection point of contact to ensure test plans adequately test security and privacy requirements.

 

All of these roles would work with enterprise security and compliance.  While every team member is responsible for data protection, designating specific individuals with these roles ensures that proper attention is given to data.

 

Finally…

Given the number of data breaches reported these days, it's clear to me that our industry has not been giving proper attention to data protection.  In this five-post series, I couldn't possibly cover all the things that need to be considered, let alone accomplished.  I hope it has helped you think about what your teams are doing now and how they can be better prepared to love their data better than they have in the past.

 

And speaking of preparation, I'm going to leave a plug here for my upcoming THWACKcamp session on the Seven Samurai of SQL Server Data Protection.  In this session, Thomas LaRock and I go over seven features in Windows and SQL Server that you should be using.  Don't worry if you aren't lucky enough to use SQL Server; there's plenty of data protection goodness for everyone. Plus a bit of snark, as usual.

 

I also wrote an eBook for SolarWinds called Ten Ways We Can Steal Your Data with more tips about loving your data.

 

See you at THWACKcamp!

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

With 2018 two-thirds over, federal agencies should be well into checking off the various cloud migration activities outlined in the American Technology Council’s Federal IT Modernization Report. Low-risk cloud migration projects were given clearance to commence as of April 1, and security measures and risk assessments will take place throughout the rest of the year. 

 

Agencies must remain aggressive with their cloud migration efforts yet continue to enforce and report on security measures while undergoing a significant transition. Adopting a pair of policies that take traditional monitoring a step further can help them continue operating efficiently.

 

Deep Cloud Monitoring

 

As our recent SolarWinds IT Trends survey indicates, hybrid IT and multicloud environments are becoming increasingly prevalent. Agencies are keeping some infrastructure and applications onsite while turning to different cloud providers for other types of workloads. This trend will likely continue as agencies modernize their IT systems and become more dependent on federally specific implementations of commercial cloud technologies, as called for in the ATC report.

 

A multicloud and hybrid IT approach can create challenges. For example, “blind spots” can creep in as data passes back and forth between environments, making it difficult for federal IT professionals to keep track of data in these hybrid environments. In addition, trying to manage all the data while ensuring adequate controls are in place as it moves between cloud providers and agencies can be an enormously complex and challenging operation. It can be difficult to detect anomalies or flag potential problems.

 

To address these challenges, administrators should consider investing in platforms and strategies that provide deep network monitoring across both on-premise and cloud environments. They should have the same level of awareness and visibility into data that resides on AWS or Microsoft servers as they would on their own in-house network.

 

Deep Email Monitoring

 

In addition to focusing on overall network modernization, the ATC report specifically calls out the need for shared services. In particular, the report cites moving toward cloud-based email and collaboration tools as agencies attempt to replace duplicative legacy IT systems.

 

The Air Force is leading the charge here with its transition to Microsoft Office 365, but there are inherent dangers in even a seemingly simple migration to cloud email. Witness the damage done by recent Gmail, Yahoo!, and Office 365 email outages, which caused hours of lost productivity and potentially cost organizations hundreds of millions of dollars. Lost email can also result in missed communications, which can be especially worrisome if those messages contain mission-critical and time-sensitive information.

 

Agencies should consider implementing procedures that allow their teams to monitor email paths, system state, and availability just as closely as they would any other applications operating in hybrid IT environments. Emails take different paths as they move between source and destination. Managers should closely monitor those to help ensure that the information moves between hosted providers and on-premise networks without fail. This practice can help IT professionals better understand and monitor email service quality and performance to help ensure continuous uptime.
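
As a hedged example of what "monitor email paths and availability" can look like in practice, here is a small Python sketch (standard library only) that times an SMTP handshake against a hosted mail endpoint. The host names are placeholders, and a production check would also watch queues, mail flow, DNS, and the provider's own service health.

```python
# Rough sketch: probe SMTP endpoints and measure whether (and how fast) they answer.
import smtplib
import time

MAIL_HOSTS = ["smtp.example-hosted-provider.com", "mail.agency.example.gov"]  # placeholders

for host in MAIL_HOSTS:
    start = time.monotonic()
    try:
        with smtplib.SMTP(host, 25, timeout=10) as smtp:
            code, _banner = smtp.noop()  # lightweight "are you alive" round trip
        elapsed = time.monotonic() - start
        print(f"{host}: reply={code} rtt={elapsed:.2f}s")
    except OSError as exc:
        print(f"{host}: UNREACHABLE ({exc})")
```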

 

The fact that there is now a clear and direct map to modern, agile, and efficient infrastructures does not necessarily make the journey any easier. Thorough strategies aimed at cloud and application or service (like email) monitoring can help agencies navigate potential hazards and help ensure seamless and safe modernization of federal information systems.

 

Find the full article on GCN.

 


As anyone who has run a network of any size has surely experienced, behind one alert there is typically (but not always) a deeper issue that may or may not generate further alarms. An often overlooked correlation is that between a security event caught by a Network Security Monitor (NSM) and one raised by a network or service monitoring system. In certain cases, a security event will be noticed first by a network or service alarm. Due to the nature of many modern attacks, either volumetric or targeted, the goal is typically to take a given resource offline. This “offlining,” usually labeled a denial of service (DoS), has a goal that is very easy to understand: make the service go away.

 

To understand how and why correlating these events is important, we need to understand what they are. In this case, the focus will be on two different attack types and how they manifest. There are significant similarities, and knowing the difference can save time and effort in triage, whether you're dealing with large or small outages and events.

 

Keeping that in mind, the obvious goals are:

 

1. Rooting out the cause

2. Mitigating the outage

3. Understanding the event to prevent future problems

 

For the purposes of this post, the two most common issues will be described: volumetric and targeted attacks.

 

Volumetric

This one is the most common and it gets the most press due to the sheer scope of things it can damage. At a high level, this is just a traffic flood. It’s typically nothing fancy, just a lot of traffic generated in one way or another (there are a myriad of different mechanisms for creating unwanted traffic flows), typically coming from either compromised hosts or misconfigured services like SNMP, DNS recursion, or other common protocols. The destination, or target, of the traffic is a host or set of hosts that the offender wants to knock offline.

 

Targeted

This is far more stealthy. It’s more of a scalpel, where a volumetric flood is a machete. A targeted attack is typically used to gain access to a specific service or host. It is less like a flood and more like a specific exploit pointed at a service. This type of attack usually has a different goal: gaining access for recon and information collection. Sometimes a volumetric attack is simply a smoke screen for a targeted attack, which can be very difficult to root out.

 

Given what we know about these two kinds of attacks, how can we use all of our data to triage the damage better (and more quickly)? Easily, actually. A volumetric attack is fairly easy to recognize: traffic spikes, flatlined circuits, service degradation. In a strictly network engineering world, the problem would manifest and, best case, NetFlow data would be consulted in short order. Even with that dataset, it may not be obvious to a network engineer that an attack is occurring. It may just appear as a large amount of UDP traffic from different sources. Given traffic levels, a single 10G-connected host can drown out a 1G-connected site. This type of attack can also manifest in odd ways, especially if there are link speed differences along a given path; it can look like a link failure or a latency issue when in reality it is a volumetric attack.  However, if you are also running a tuned and maintained NSM of some kind, the traffic should be readily identified as a flood and can be filtered more quickly, either by the site or the upstream ISP.
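
Here is a simplified Python sketch of the cross-check described above: flag sources whose per-interval byte counts jump well above their own baseline. The flow records are assumed to be pre-aggregated dictionaries; a real deployment would feed this from a NetFlow/IPFIX collector or the NSM itself, and would alert rather than print.

```python
# Simplified sketch: flag source IPs whose current traffic volume is far above baseline.
from statistics import mean, stdev

def flag_volumetric_sources(history, current, n_sigma=3.0):
    """history: {src_ip: [bytes_per_interval, ...]}, current: {src_ip: bytes_this_interval}."""
    suspects = []
    for src, observed in current.items():
        baseline = history.get(src, [])
        if len(baseline) < 5:
            continue  # not enough samples to establish a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and observed > mu + n_sigma * sigma:
            suspects.append((src, observed, round(mu, 1)))
    return suspects

history = {"203.0.113.7": [1200, 1500, 1100, 1300, 1400, 1250]}
current = {"203.0.113.7": 950_000}
print(flag_volumetric_sources(history, current))  # [('203.0.113.7', 950000, 1291.7)]
```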

 

Targeted attacks will look very different, especially when performed on their own. This is where an NSM is critical. With attempts to compromise actual infrastructure hardware like routers and switches on a significant uptick, knowing the typical traffic patterns for your network is key. If a piece of your critical infrastructure is targeted, and it is inside of your security perimeter, your NSM should catch that and alert you to it. This is especially important in the case of your security equipment. Tapping your network in front of the filtering device can greatly aid in seeing traffic destined for your actual perimeter. Given that there are documented cases of firewalls being compromised, this is a real threat. If and when that occurs, it may appear as high load on a device, a memory allocation increase, or perhaps a set of traffic spikes, most of which a network engineer will not be concerned with as long as service is not affected. However, understanding the traffic patterns that led up to it could help uncover a far less pleasant cause.

 

Most of these occurrences are somewhat rare; nevertheless, it is a very good habit to check all data sources when something out of the baseline occurs on a given network. Perhaps more importantly, there is no substitute for good collaboration. Having a strong, positive, and ongoing working relationship between security professionals and network engineers is a key element in making any of these occurrences less painful. In many small- and medium-sized environments, these people are one and the same. But when they aren’t, collaboration at a professional level is as important and useful as the cross-referencing of data sources.

If you are old enough, you might remember the commercial that played late at night and went something like this: “It’s 10 p.m., do you know where your children are?”  This was a super-short commercial that ran on late-night TV in the 1980s, and it always kind of creeped me out.  The title of this post is slightly different, changing the time to 2 a.m., because accessing our data is much more than a 10 p.m. or earlier affair these days. We want access to our data 24/7/365! The premise of that commercial was all about the safety of children after “curfew” hours.  If you knew your children were asleep in their beds at 10 p.m., then you were good. If not, you had better start beating the bushes to find out where they were.  Back then you couldn’t just send a text saying “Get home now!!!” with an angry emoji.  Things are different now; we’re storing entire data centers in the cloud, so I think it’s time to look back at this creepy commercial from late-night 80s TV and apply it to our data in the cloud. “It’s 2 a.m., do you know where your data is?”

 

Here are some ways we can help ensure the safety of our data and know where it is, even at 2 a.m.

 

Understanding the Cloud Hosting Agreement

Much like anything else, read the fine print!  How often do we actually read it?  I’m guilty of rarely reading it unless I’m buying a house or dealing with a legal matter.  But for a cloud hosting agreement, you need to read the fine print and understand what you are gaining or losing by choosing that provider for your data.  I’m going to use Amazon Web Services (AWS) as an example (this is by no means an endorsement of Amazon). Amazon has done a really good job of publishing the fine print in a way that’s actually easy to read and won’t turn you blind.  Here’s an excerpt from the data privacy page on their website:

Ownership and Control of customer content:

Access: As a customer, you manage access to your content and user access to AWS services and resources. We provide an advanced set of access, encryption, and logging features to help you do this effectively (such as AWS CloudTrail). We do not access or use your content for any purpose without your consent. We never use your content or derive information from it for marketing or advertising.

Storage: You choose the AWS Region(s) in which your content is stored. We do not move or replicate your content outside of your chosen AWS Region(s) without your consent.

Security: You choose how your content is secured. We offer you strong encryption for your content in transit and at rest, and we provide you with the option to manage your own encryption keys. 

 

 

Choose Where Your Data Lives

Cloud storage companies don’t put all their eggs, or more accurately, all your eggs, in one basket. Amazon, Microsoft, and Google have data centers all over the world.  Most of them allow you to choose the region in which you wish to store your data.  Here’s an example with Microsoft Azure: in the U.S. alone, Microsoft has 8 data centers from coast to coast where your data is stored at rest.  The locations are Quincy, WA; Santa Clara, CA; Cheyenne, WY; San Antonio, TX; Des Moines, IA; Chicago, IL; Blue Ridge, VA; and Boydton, VA.  AWS offers customers the choice of which region they wish to store their data in, with the assurance that they “won’t move or replicate your content outside of your chosen AWS Region(s) without your consent (https://aws.amazon.com/compliance/data-privacy-faq/).” With both of these options, it’s easy to know where your data lives at rest.

 

 

Figure 1 Microsoft Azure Data Centers in the US
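
To take the AWS example a step further, here is a hedged boto3 sketch that answers "where does my data live?" for S3 by listing every bucket and the region it resides in. It assumes boto3 is installed and credentials are already configured, and of course S3 is rarely the only place your data sits.

```python
# Rough sketch: list S3 buckets and the AWS Region each one lives in.
import boto3

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    # get_bucket_location returns None for the legacy us-east-1 default
    region = s3.get_bucket_location(Bucket=name)["LocationConstraint"] or "us-east-1"
    print(f"{name}: {region}")
```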

 

Monitor, Monitor, Monitor

It all comes back to systems monitoring.  There are many cloud monitoring tools out there. Find the right one for your situation and configure it to monitor your systems the way you want them to be monitored.  Create custom dashboards, run health checks, manage backups, and make sure it’s all working how you want it to work.  If there is a feature you wish were included in your systems monitoring tool, ping the provider and let them know.  For most good companies, feedback is valued and feature requests are honestly considered for future implementation.

Greetings from Las Vegas! I'm at VMworld this week. If you are here, stop by the booth and say hello, grab some swag, and let's talk data.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Adding a 0 to the 3-2-1 Rule

I like this idea. By adding the 0, we are reminded that backups are useless unless they can be restored. You need to actively test your recovery process.

 

Companies Are Relaxing Their Degree Requirements to Find Top Talent

There’s another point to make here: job advertisements are often horribly written. Removing degree requirements is a good first step for 80% of job listings, but more work needs to be done by companies that want to recruit and retain top talent.

 

Google sued for tracking you, even when 'location history' is off

No means no. Unless you are Google. In which case they can do what they want until they are caught and sued. Remember when Google talked about how they weren’t evil? Good times.

 

Google is irresponsible, claims Fortnite's chief in bug row

As I was saying...

 

As customers gather for VMworld, VMware unveils new security and hybrid cloud management tools

I applaud VMware for making security a priority for their products. I’m disappointed that this security requires users to pay more. Security should be a right, not a privilege.

 

Fusion: A Collaborative Robotic Telepresence Parasite That Lives on Your Back

At first this sounded creepy, but then I thought about how I might be able to use it to slap someone remotely and now I know what to get some folks for Christmas gifts this year.

 

’Re-attaching for convenience’: nine passive-aggressive email phrases that must end now

I’d add the “I need this now,” to which you do the work and reply, and then get back an OOO message regarding their 4-week trip to Bali.

 

Obligatory selfie from the VMvillage:

 

The previous blog reviewed some guidelines that laid the foundation for security through understanding your environment and planning how elements within that environment are configured, used, accessed, and tracked. Although implementing these recommended best practices won’t make you impervious to all attacks, the concept of building a baseline can afford basic protections. This in turn can help you detect and remediate in the event of a security incident.

 

But what about arming your network with capabilities to analyze the data coming into your network? Data is dynamic; threats are dynamic and evolving. Trying to keep up to date is a daily challenge that has pushed the application of analytical methods beyond the traditionally static protections of legacy firewall rules. Even with IPS systems that have continuously updating signatures, it’s not possible to detect everything. A signature is usually a deterministic solution; it identifies things we know about. What is required? Newer methods based on some well-established formulas and principles (think statistics, linear algebra, vectors, etc.), which, by the way, are often used to help drive the creation of those signatures.

 

Here is a synopsis of the various analytics methods applicable to the threat landscape today. When looking for solutions for your own organization, understanding their capabilities will be useful.

 

Deterministic Rules-Based Analysis

  • Uses known formulas and behaviors applied as rule-sets to evaluate inputs such as packet headers and data.
  • There is no randomness introduced in the evaluation criteria. Analysis is reactive in nature, allowing for the detection of known attack patterns.
  • The output is fully determined by the parameter values and the initial conditions.
  • Examples:
    • Protocols behave in an RFC-defined fashion; they use well-known ports and often follow a known set of flows
    • When establishing a TCP connection, look for the three-way handshake
    • Stateful firewalling expects certain parameters: TCP flags and sequence numbers
    • Behaviors that map to known signatures
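
To make the deterministic category concrete, here is a toy Python rule in the spirit of the TCP example above: given the ordered flags observed at the start of a flow, check whether the connection opened with the expected three-way handshake. Real engines operate on parsed packets rather than strings; this only illustrates the "no randomness, known pattern" nature of the approach.

```python
# Toy deterministic rule: did this flow open with a proper three-way handshake?
EXPECTED_HANDSHAKE = ["SYN", "SYN-ACK", "ACK"]

def handshake_ok(observed_flags):
    """observed_flags: ordered list of TCP flag combinations seen for a new flow."""
    return observed_flags[:3] == EXPECTED_HANDSHAKE

print(handshake_ok(["SYN", "SYN-ACK", "ACK", "PSH-ACK"]))  # True: normal open
print(handshake_ok(["SYN", "RST"]))                        # False: refused or anomalous
```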

 

Heuristic Analysis

  • Heuristics are often used to detect events in the form of a rule-based engine that can perform functions on data in real time.
  • A heuristic engine goes beyond looking for known patterns by sandboxing a file and executing embedded instructions. It can also examine constructs such as processes and structures in memory, metadata, and the payload of packets.
  • The advantage of heuristic analysis is detection of variants of both existing and potential zero-day attacks.
  • Heuristics are often combined with other techniques, such as signature detection and reputation analysis, to increase the fidelity of results.
  • Heuristics can perform contextual grouping.
    • Example: Activities performed at certain times of the day or year, detection of behavioral shifts to detect new viruses.

 

Statistical Analysis

  • Statistical analysis is often used in anomaly detection.
  • The goal is to identify some traffic parameters that vary significantly from the normal behavior or “baseline.”
  • There are two main classes of statistical procedures for data analysis and anomaly detection:
    • The first class is based on applying volumetrics to individual data points. There is some expected level of variation between telemetry information and baseline; any deviation beyond certain thresholds is defined as anomalous.
      • Example: Volumetric outlier detection matching some known threat vector behavior – distinct, max, total, entropy
    • The second class measures the changes in distribution by windowing the data and counting the number of events or data points to determine anomalies.
      • Example: Attack characterization of malware – a series of small-packet exchanges between host and CnC seen over time (skewness)
  • Methods must be chosen to reduce false positives and produce a confidence interval or probability score (above or below 0.5).

 

Data Science Analysis – Machine Learning

  • The process of collecting, correlating, organizing, and analyzing large sets of data to discover patterns and other useful information like relationships between variables.
  • The sheer volume of data and the different formats of the data (structured and unstructured) collected across multiple telemetry sources are what characterize “Big Data.”
    • Structured Data - resides in a fixed field within a record, transaction, or file, and lends itself to classification.
    • Unstructured Data - webpages, PDF files, presentations, and email attachments – data that is not easily classified.
  • The large volume of data is analyzed using specialized systems, algorithms, and methods for predictive analytics, data mining, forecasting, and optimization:
    • Supervised Machine Learning - analysis takes place on a labeled set of data inputs and tries to infer a function (by way of a learner) based on pre-defined outputs. One common technique is walking a decision tree to classify data.
      • Example: SPAM filtering examining header info, misspellings, key words.
    • Unsupervised Machine Learning - looks for patterns and groupings in unlabeled input data and attempts to infer relationships between objects. Lends itself to predictive capabilities and includes methods such as clustering.
      • Example: Discovery of a new variation of a family of malware.

 

We’ve covered a lot of concepts in this series. In the final blog, we will look at security monitoring best practices and see how our knowledge of the theoretical can help us be practical in the real world.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

The Cloud First policy is well known throughout the U.K. public sector. It is an important tenet of the government’s digitalization initiative, and a wider push to be “Cloud Native.” To guide this, the Government Digital Service (GDS) published an advice-driven blog post, in which it suggested that IT teams should create “resilient, flexible, and API-driven” applications. At the same time, the GDS is encouraging any staff in defense, government, or the NHS to trial new Software-as-a-Service (SaaS) applications.

 

It’s a significant statement of the government’s intent. Yet, with over £2.6B spent on cloud and digital services over the last five years, adoption remains comparatively low. Given that spending, one might expect more than the 30% of NHS and 61% of central government entities that have adopted some level of public cloud, which were the findings of a recent FOI request conducted by SolarWinds. Even the Ministry of Defence (MOD), which has adopted some public cloud, stated it had migrated less than 25% of its architecture.

 

Legacy Technology

The NHS, central government, and the MOD have all previously made significant investments in infrastructures, which have inadvertently created a legacy technology environment. This technology now forms a barrier to public cloud adoption for 65% of central government organizations and 57% of NHS trusts. Existing licenses for vendor-specific solutions are creating a sense of vendor lock-in, as organizations feel the need to justify their previous investment before adopting cloud technology.

 

IT directors in the public sector should take stock of their digital infrastructure and investments. With the whole landscape in mind, the question to ask is: “Are these delivering the flexibility and cost-efficiency we need?” The answer for many is likely to be “I’m not sure.”

 

This lack of transparency stems from an absence of visibility into technology performance. Many NHS trusts (77%) and central government organizations (55%) are either unsure if they are using the same monitoring tools across their whole infrastructure or are using different tools for on-premises and cloud environments. IT departments need to consider how they regain visibility across these disparate systems. Overarching measurement and monitoring tools will likely form a significant part of this.

 

Security

Security also remains a consideration. NHS Digital only provided guidance in January 2018, affirming public cloud’s suitability for patient data. This delay may account for a significant portion of the security mistrust around the cloud plaguing 61% of NHS trusts, according to a recent FOI request made by SolarWinds. However, security and compliance also remain concerns for central government as well as the MOD, although at a much lower 39%.

 

To this end, the U.K. Government and National Cyber Security Centre has issued overarching guidelines on cloud security. However, these advisory measures do not go far enough to reassure public sector organizations that the public cloud is secure. It’s easy to understand why the public sector remains reticent about the cloud. Given recent high-profile security breaches, any organization would want reassurance.

 

Next Steps and Solutions

Much like the implementation of the Cloud First policy overall, it is all trust and little verification. While the government may lay out best practices, there is no real initiative in place to check that these are being followed. In this regard, the GDS may stand to gain from a look across the pond. The Federal Risk and Authorization Management Program (FedRAMP) in the U.S. provides one approach to security across the public sector. With a preapproved pool of cloud service providers, the public sector can easily find trusted, secure solutions. This helps make adoption of cloud services simpler and shifts the conversation from security and assurances to innovation and meeting business needs.

 

At the same time, IT providers need to make the transition as easy as possible for the public sector. A crucial part of this is monitoring tools capable of working across both a legacy and cloud environment. Using many different monitoring tools may make it difficult to create a cohesive picture of the whole IT environment. With 48% of the NHS and 53% of central government using four or more monitoring tools, this appears to be very much the case in the public sector. Technology providers need to help IT departments overcome this with solutions that are designed to link legacy and new systems into one environment. This will be integral for converting public cloud investment into demonstrable ROI.

 

Proactive steps are needed to address the uncertainty around the use of public cloud in the public sector. Without them, the U.K. will struggle to make the most of new cloud-centric technologies. Embracing the cloud is critical. Without it, public sector organizations may find themselves struggling in the face of cyberattacks, downtime, and costly maintenance, all risks associated with a legacy IT environment.

 

Find the full article on GovTech Leaders.

When you go on a trip somewhere, you always take a map. When I was growing up, that map came in the form of a road atlas that was bigger than I was. I remember spending hours in the car with my parents tracing out all the different ways that we could get from where we were to where we wanted to be.

Today, my kids are entertained by GPS and navigation systems that have automated the process. Instead of spending time drawing out the lines between cities and wondering how to get around, they can simply put a destination into the application and get a turn-by-turn route to where they want to go. Automation has made a formerly difficult process very simple.

In much the same way, many systems professionals find themselves mapping out applications the "old fashioned" way. They work backwards to figure out how applications behave and what resources they need to operate effectively. While it's not hard to find dependencies for major things, like database access and user interfaces, sometimes things slip through the cracks. One of my favorite stories is about a big Amazon Web Services outage in 2017, during which the AWS status dashboard stayed green because the status icons themselves were hosted on the affected AWS services and couldn't be updated during the outage. Status lights aren't a critical function of AWS, but it is a great example of how manual application dependency mapping can cause unforeseen problems.

Thankfully, the amount of data generated by applications today allows for solutions that can automatically detect and map applications and infrastructure. Much like the change from road atlases to GPS, the amount of good data at the disposal of application developers allows them to build better maps of the environment. This application mapping helps professionals figure out where things are and how best to identify problems with applications on a global scale. This could mean dependencies located in one availability zone are affecting application performance on the other side of the country, or even on the other side of the world.
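
Here is a hedged Python sketch of the core idea: build a dependency map from observed connections, then walk it to see everything a given service ultimately relies on. Real products derive the edge list from flow data, traces, or agents; the service names here are made up for illustration.

```python
# Toy dependency map: edges are (consumer, dependency) pairs observed on the wire.
from collections import defaultdict

observed = [
    ("web-frontend", "api-gateway"),
    ("api-gateway", "orders-db"),
    ("api-gateway", "auth-service"),
    ("auth-service", "users-db"),
]

graph = defaultdict(set)
for consumer, dependency in observed:
    graph[consumer].add(dependency)

def all_dependencies(service, graph):
    """Depth-first walk returning every direct and transitive dependency."""
    seen, stack = set(), [service]
    while stack:
        for dep in graph.get(stack.pop(), ()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(all_dependencies("web-frontend", graph)))
# ['api-gateway', 'auth-service', 'orders-db', 'users-db']
```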

How can we take full advantage of these application maps? For that, we really need context. Think back to our GPS example. GPS in and of itself doesn't do much more than pinpoint you on a map. The value in GPS is that the data feeds applications that provide turn-by-turn navigation. These navigation systems are the context on top of the location data. Rather than simply printing out directions from a website, the app can take us right to the front door of our location.

So too can contextual application mapping help us when we need to troubleshoot bigger issues. By offering additional data about applications above and beyond simple port mapping or server locations, we can provide even richer data to help figure out problems. Think about a map that shows all your availability zones and then overlays things like severe weather alerts or winter weather advisories. The additional context can help you understand how mesoscale issues can affect service and application availability.

There are tons of other ways that contextual application mapping can help you troubleshoot more effectively. What are some of your favorite examples? What would you like to see tools do that would help you figure out those unsolvable problems? Leave a comment below with your thoughts!

White cubes with black letters spelling out "MY DATA" on black background

We've talked about building a culture, why it applies to all data environments, and some specific types of data protection features you should be considering.  Today, we'll be considering the culture of protection the actual owners of the data (customers, employees, vendors, financial partners, etc.) expect from your stewardship of their data.

 

Data owners expect you will:

 

  • Know what data you collect
  • Know the purpose for which you collected it
  • Tell them the purposes for which you collected the data
  • Be appropriately transparent about data uses and protection
  • Use skilled data professionals to architect and design data protection features
  • Document those purposes so that future users can understand
  • Honor the purposes for which you collected it and not exceed those reasons
  • Categorize the data for its sensitivity and compliance requirements
  • Document those categorizations
  • Track data changes
  • Audit data changes
  • Version reference data
  • Use strong data governance practices throughout
  • Protect non-production environments just as well as production environments
  • Prioritize data defect fixes
  • Make the metadata that describes the data easily available to all users of the data
  • Know the sources and provenance of data used to enhance their data
  • Secure the data as close as possible to the data at rest so that all access, via any means, provides the most security
  • Mask the data where needed so that unintentional disclosure is mitigated
  • Back up the data so that it's there for the customer's use
  • Secure your backups so that it's not there for bad actors to use
  • Limit access to data to just those who have a need to know it
  • Immediately remove access to their data when staff leaves
  • Do background checks, where allowed, on staff accessing data
  • Test users of data regularly on good data hygiene practices
  • Ensure data quality so that processes provide the right outcomes
  • Ensure applications and other transformations are done correctly
  • Ensure applications and other transformations do not unintentionally apply biases to outcomes of using their data
  • Provide data owners access to review their data
  • Provide data owners the ability to request corrections to their data
  • Provide data owners the ability to have their data removed from your systems
  • Monitor third-party data processors for compliance with your data security requirements
  • Secure the data throughout the processing stream
  • Secure the data even when it is printed or published
  • Secure data even on mobile devices
  • Use strong authentication methods and tools
  • Monitor export and transfer of data outside its normal storage locations
  • Train IT and business users on security and privacy methods and tools
  • Protect user systems from bad actors
  • Monitor uses of sensitive data
  • Monitor systems for exploits, intrusion attempts, and other security risks
  • Securely dispose of storage hardware so that data is protected
  • Securely remove data when its lifecycle comes to an end
  • Accurately report data misuses and breaches
  • Treat their data as well as you'd protect your own

 

And after all that:

 

  • Actively steward the data, metadata, and data governance processes as business and compliance requirements change

 

Sound overwhelming? It should. We need to think of data as its own product, with a product manager, data models, a metadata repository, a business user portal about the data products, and all the processes that we put in place to protect code. Reread the list, changing the word data to code. We do most of this already for applications and other code. We should, at the very least, provide the same sort of process for data.
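
As one hedged example of the "mask the data" expectation in the list above, here is a small Python sketch that pseudonymizes an email field with a keyed hash before the data leaves production. The secret handling and field choice are assumptions for illustration; real masking policies need governance and legal review, not just code.

```python
# Rough sketch: deterministic pseudonymization of an email field with a keyed hash.
# The secret should come from a vault in practice; it is hard-coded here for brevity.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"

def mask_email(email: str) -> str:
    digest = hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256).hexdigest()
    return f"user-{digest[:12]}@masked.example"

rows = [{"id": 1, "email": "pat@example.com"}, {"id": 2, "email": "sam@example.com"}]
masked = [{**row, "email": mask_email(row["email"])} for row in rows]
print(masked)
```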

 

Your customer might not know they need all those things, but they sure expect them. I'd love to hear other expectations based on your own experiences.

 

“Hello! My Name is Phoummala and I have recurring imposter syndrome. I struggle with this frequently and it’s a battle that I know I am not alone in.”

What is Imposter Syndrome?

Imposter syndrome is a feeling of self-doubt, insecurity, or fraudulence despite often overwhelming evidence to the contrary. Imposter syndrome can occur repeatedly. It can affect anyone and does not discriminate.  From the smart, successful “rock stars” in your industry to the everyday employee, that feeling of being a fraud, that someday someone is going to find out you don’t know anything, lives in more people’s heads than you realize. YES, it’s that ugly voice in your head telling you that you don’t deserve that promotion or that you can’t do something because everybody is way smarter than you. If you’ve experienced that voice, then you have at some point had imposter syndrome. This blog post is not about curing imposter syndrome, because honestly I do not think we humans can, but we can overcome it when that negative voice in our head pops up again.

There has been a lot of research on why imposter syndrome happens, and experts point to various reasons, ranging from more intense competition in the workplace and increasing professional success to low self-esteem and other contributing background factors. I could list off all the research that has gone into this, but at the end of the day we know it happens and we need to beat it to continue to be successful in our lives.

Beating Imposter Syndrome

I am not a healthcare professional. These are simply suggestions drawn from what I have used to beat my imposter syndrome. If you are currently suffering from this and it is severely affecting your life, for example through depression, I strongly recommend speaking to someone in the medical field.

  1. Acknowledgement - The first step to overcoming anything is to acknowledge it. As with overcoming addictions and other conditions, we need to acknowledge that these thoughts exist. Once you’ve done this, you can tell that ugly voice it needs to go away so you can take control of your own self-power.
  2. Talk – Now that you’ve acknowledged it, talk about it. Find someone you can trust and talk about your feelings. Yes, we can talk about feelings in IT. Talking to someone can release so much stress and pressure. The key here is to talk to someone who is within your support circle and not someone who is going to be negative. We want positive thoughts here!
  3. Reaffirm – When my imposter syndrome kicks in, I tell myself all the good things I’ve done and list off successes. Go through your accomplishments and celebrate each one because you wouldn’t have gotten those if you didn’t do something great. Another tip is to record your successes somewhere, perhaps a journal or notebook in your desk, and each time you start to feel bad, pull out that list and read out the accomplishments.
  4. Strike a Pose – Yes, you read that correctly, strike a pose--a power pose, that is. Power poses can help you regain your self-power and feel more powerful. Yes, there is such a thing as power poses. Thanks to my friend Jeramiah who introduced me to this Ted Talk by Amy Cuddy. Amy is a Harvard Business School professor and social psychologist, basically an expert in her area—and she has experienced imposter syndrome herself! Amy talks about how our body language affects how we see ourselves and how others see us. With simple changes in our posture and body language, you can start to feel confident and powerful. Sitting straight and upright, not slouching, and doing the Wonder Woman pose all help us feel more powerful. To find out more about power poses, check out her Ted Talk.
  5. Fake it – There are so many variations of the saying “Fake it …” but this does work. You can trick yourself into thinking you can do it until you become it. Amy Cuddy’s Ted Talk goes into deep discussion about this and she believes that it’s possible to fake feelings and gain that power back until we truly feel more powerful. “Don’t fake it ‘til you make it. Fake it ‘til you become it.” Her Ted Talk is amazing and I highly recommend it to everyone.

 

 

We are Humans

I cannot stress enough that we are humans and we cannot be expected to know everything. It’s just not possible. It’s also okay to feel what you feel. Without emotions, we aren’t living a real life, and that’s what makes us humans so unique.  So, when you start to hear that ugly voice telling you that you suck or that you’re a fraud, tell yourself, “It’s okay that I feel this way, and I won’t let it continue, because I belong here.”  Believe in yourself!

 

Getting ready for VMworld next week, working on my presentations, demos, and buying new shoes. If you are in Vegas for the show, stop by the booth to grab some swag and say hello.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

An 11-year-old changed election results on a replica Florida state website in under 10 minutes

This either says a lot about the election websites, or the skills of the kids. Probably a little bit of both.

 

LHC physicists embrace brute-force approach to particle hunt

Nice example of not being afraid to admit what you have been doing isn’t working, and to try something different. The LHC is an example of a project that will benefit from quantum computing; look for that to be the next step in their research.

 

Tesla's stock falls sharply after Elon Musk reveals 'excruciating' year

So, we all see that Musk is having a meltdown, right? Might be time for him to cut back on the number of projects he’s trying to run at this time.

 

Telling the Truth About Defects in Technology Should Never, Ever, Ever Be Illegal. EVER.

Agreed. I recognize that in some cases you’d like to notify the company directly and give them a chance to patch. But to discourage such research isn’t helpful. I’d like to think programs like bug bounties make for a more secure online experience overall, for everyone.

 

Hackers steal $13.5 million from Indian bank in global attack

If only there was a way for this bank to have known there was a defect in their software. Oh wait, there was a warning. My guess is that this bank might understand the benefits of encouraging others to find security flaws before the criminals.

 

Make Shadow IT a Force for Good

Good list of tips for everyone that has struggled with Shadow IT. I’m a huge advocate of zero-trust networks, and I think that’s got to be the default for every company these days.

 

My Favorite Sayings

Well, not mine, but ones I enjoyed reading and thought you would, too.

 

10 years as a Microsoft MVP and I'm just as giddy as if it was my first day. My 10-year ring arrived over the weekend, and I wanted to share:

 

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

There is so much involved in improving agency performance that it can be difficult to pinpoint one thing—to make one recommendation—as a starting point toward performance enhancement.

 

That said, technology plays an enormous role, according to the 2017 SolarWinds Federal Cybersecurity Survey of 200 federal government IT decision makers and influencers. The results indicate that government agencies perform their missions more adeptly when they incorporate strong IT controls into their business processes.

 

Managing Risk

 

According to the survey, nearly 80% of respondents describe their agency’s ability to provide managers and auditors with evidence of appropriate IT controls as good or excellent. Additionally, well more than half of respondents say they have updated policies, procedures, and technology, and that reports are generated on a regular basis. That’s good news.

 

It turns out that being able to provide evidence of IT controls is a strong contributing factor to managing risk, which improves performance.

 

The study identified several other factors that contributed to an agency’s ability to manage risk. Those respondents that rated their agency’s ability to provide evidence of IT controls as “excellent” note that the following have helped contribute to their agency’s success in managing risk.

 

  • IT modernization (61%)
  • Tools to monitor and report risk (57%)
  • Network optimization (54%)
  • Data center optimization (48%)

Interestingly, significantly more defense than civilian respondents indicate IT modernization contributed to successfully managing risk—51% versus 37%.

 

In terms of the role regulations and mandates play in managing risk, more than half of respondents that indicate regulations helped with that effort cite both the Risk Management Framework and the NIST Framework for Improving Critical Infrastructure Cybersecurity as a positive contributing factor.

 

Enhanced Security

 

The presence of strong IT controls can also help federal IT pros more quickly identify security events and enhance network and application security—again, helping to improve performance.

 

According to the survey, of those respondents that described their agency’s ability to provide managers and auditors with evidence of appropriate IT controls as “excellent,” 59% are able to identify rogue devices on the network and inappropriate internet access by insiders within minutes. More than half are able to identify distributed denial of service attacks or a misuse/abuse of credentials within the same short timeframe.

 

A significantly greater number of respondents from DoD agencies—versus civilian agencies—said their agency can detect misuse/abuse of credentials within minutes.

 

Finally, 61% of those respondents that describe their agency’s ability to provide managers and auditors with evidence of appropriate IT controls as “excellent” rate the effectiveness of endpoint security software and network access control (NAC) solutions as “high.” Respondents describing their agency’s IT controls as good or poor reported these solutions as far less effective.

 

Other tools that federal IT pros from agencies with “excellent” IT controls identified as highly effective were:

 

  • Configuration management software (57%)
  • Web application security tools (56%)
  • Patch management software (54%)
  • File integrity monitoring software (52%)
  • SIEM software (52%)

 

Conclusion

 

The majority of DoD respondents credited IT tools that enabled them to monitor and report risk for improving how they managed and mitigated security threats. Three-fourths of respondents said federal agencies are more proactive today than five years ago concerning IT security—including an ability today to detect rogue devices on government networks within minutes.

 

While it can be difficult to pinpoint one thing that can help enhance agency performance, having strong IT controls and monitoring tools is certainly a great place to start.

 

Find the full article on SIGNAL.
