Geek Speak

If you work in an organization that is publicly traded, or you are subject to other government regulations, you will be required to perform a periodic network audit. Even if an external entity doesn’t require an audit, it's a good idea to review your network to ensure your environment meets current standards.


In a large network, a full audit may seem like an overwhelming task, but a few regular practices can help at audit time. As with most things, a successful audit begins with planning. The overall health and manageability of your network will dramatically reduce your audit and remediation efforts.


Let’s get a few basics out of the way. You will need an up-to-date inventory of all the devices on your network. There are many ways to maintain this inventory, ranging from a low-tech spreadsheet to a fully automated inventory tool. Regardless of the solution you use, you should track details like hostname, management IP, serial number, and code version. You should also keep basic configuration standards and templates against which your device configurations can be compared. Proper standards, correctly defined and applied, will enforce a minimum baseline and help you stay compliant throughout the year.
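
That comparison lends itself to scripting. As a minimal sketch, assuming device configurations are available as plain text (the template and config lines below are hypothetical), Python's difflib can surface lines that deviate from the standard:

```python
import difflib

def config_drift(template_lines, device_lines):
    """Return the lines that differ between a golden template and a device config."""
    diff = difflib.unified_diff(template_lines, device_lines,
                                fromfile="template", tofile="device", lineterm="")
    # Keep only real content changes; drop the diff headers and context lines.
    return [line for line in diff
            if line[:1] in {"+", "-"} and not line.startswith(("+++", "---"))]

# Hypothetical template and device config lines for illustration.
template = ["service password-encryption", "logging host 10.0.0.5", "no ip http server"]
device   = ["service password-encryption", "logging host 10.9.9.9", "no ip http server"]

for change in config_drift(template, device):
    print(change)  # prints the deviating logging host lines
```

A real deployment would loop over configs pulled by your backup tool, but the core check is this simple.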


As you consider your networking standards, appropriate logging is a must. You will need a centralized logging server that collects syslog from your devices. There are many open source and commercial logging options. At a minimum, you will need to log invalid login attempts, hardware failures, and adjacency changes. Firewalls should log ACL denies and other security events. As you build a logging infrastructure, you can add tools to parse logs and alert on relevant events. You will need to carefully control who has write access to your log data. Check internal and external audit standards for retention requirements; many organizations need to save audit logs for seven years or more.
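
For example, a small script can sweep collected syslog for failed logins and count them per device. This is only a sketch; the message format shown is a hypothetical Cisco-style example, so you would adjust the pattern to match your devices:

```python
import re
from collections import Counter

# Match failed-login syslog messages and capture the reporting host.
# The format here is a made-up example; tune the regex to your gear.
FAILED_LOGIN = re.compile(r"^(?P<host>\S+) .*LOGIN_FAILED")

def failed_logins(lines):
    """Count failed-login messages per source host."""
    counts = Counter()
    for line in lines:
        m = FAILED_LOGIN.match(line)
        if m:
            counts[m.group("host")] += 1
    return counts

sample = [
    "core-sw1 %SEC_LOGIN-4-LOGIN_FAILED: Login failed [user: admin]",
    "core-sw1 %SEC_LOGIN-4-LOGIN_FAILED: Login failed [user: root]",
    "edge-rtr2 %SYS-5-CONFIG_I: Configured from console",
]
print(failed_logins(sample))  # core-sw1 shows two failures; edge-rtr2 none
```

A parser like this is the seed of the alerting layer described above: once the counts exist, thresholds and notifications are easy to bolt on.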


In recent years, wireless networks have become a sticking point in network audits. Compared to wired networks, wireless infrastructure has been changing rapidly, wireless devices have proliferated, and vulnerabilities in legacy wireless abound. Many organizations implemented wireless quickly with a pre-shared key without considering the security or audit consequences. To meet current wireless security audit requirements, your wireless network will need to authenticate via 802.1X with user credentials or a unique certificate managed by PKI. You will need to log all access to the wireless network. In addition, some standards may require wireless technology to perform rogue detection and mitigation.


It's important to remember the purpose of a network audit. Typically, an audit measures how well you comply with established controls. Even if your network is, in your estimation, well run, secure, and well documented, you can fail an audit if you do not comply with established policies and procedures. If you are responsible for your network audit, get clear guidance on the audit standards and review any policies and procedures that apply.


Network audits can be challenging and labor intensive. However, with careful planning, the right tools, and diligence over time, the task will become much simpler.

Anyone who has looked at the number of event IDs assigned to Windows events has probably felt overwhelmed. In the last blog, we looked at some best-practice events that are a great start to providing contextual data in the event of a security breach. For example, repeated login failures, attempted privilege escalations, and attempts to manipulate registry entries all provide useful forensic data during incident response. In this blog, let’s broaden our horizons. The fact that event IDs exist in several sources beyond Microsoft-Windows-Security-Auditing allows us to be more proactive and better understand how our users and systems interact. It’s all about building a baseline of what is normal and recognizing potential threats and IoCs as sequences of event IDs.


Here are some additional event sources and useful IDs that will help increase visibility through a better understanding of network assets and how they are used:






Event(s) | Event ID(s) | Source | Use Case
---------|-------------|--------|---------
Event Log was Cleared / Audit Log was Cleared | 104 / 1102 | System / Security | Attempt to hide malicious activities
Application Error / Application Hang | 1000 / 1002 | Application Error / Application Hang | Attempt by malware to interact with an application
AppLocker Block / AppLocker Warning | 8003, 8004 / 8006, 8007 | AppLocker | Attempt to violate usage policy on the system
Windows Service Fails or Crashes | 7022, 7023, 7024, 7026, 7031, 7032, 7034 | Service Control Manager | Repeated failures on the same systems, particularly high-value assets, could indicate a compromise
Detected an invalid image hash of an image file / Detected an invalid page hash of an image file / Failed kernel driver loading | 3001, 3002, 3003, 3004, 3010, 3023 | Code Integrity | Insertion of malicious code or drivers into the kernel
New Kernel Filter Driver / New Windows Service | — / 7045 | Service Control Manager | Changes to software on critical assets outside of scheduled maintenance windows
Detected Malware / Action on Malware Failed | 1006 / 1008 | Windows Defender | Malware activity on a host, or a failed remediation that needs immediate attention

Between these events and the list of account, process, and firewall events from the previous blog, you now have a well-rounded security policy.



Note that this is by no means an exhaustive list. The idea is to get started with a set of events to build a threat detection policy. There are several other types of events such as those that report on wireless usage, mobile device activities, and print services. These events can be tracked for audit, billing, and resource utilization purposes and periodically archived.


The next step is to define how to apply our security policy and to be prepared to tune it over time. To facilitate this process, create a checklist for your environment:

  • Identify which events should be monitored for critical assets versus all endpoints
  • Identify those events that should be tracked at various times of the day
  • Identify events that should trigger an immediate action, such as sending an email to an administrator; these may be tied to a task directly from the Windows Event Viewer, or a series of events can be configured as alert triggers via the Task Scheduler.


Schedule an event-driven task


  • If I don’t use features such as AppLocker or Defender, should I investigate their applicability?

Using AppLocker


  • Understand which events should be prioritized in terms of reaction and remediation.
  • What events or combinations of events are considered alerts, and which are providing contextual information?
  • What events need to be tied to thresholds that make a clear distinction between normal versus threat?
  • Do I need to have a policy for threat detection and a process for collecting audit information?
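
To make the "sequences of event IDs" idea concrete, here is a minimal sketch that flags a host where repeated failed logons (event ID 4625) are followed by the audit log being cleared (event ID 1102). The record format and the threshold are assumptions for illustration:

```python
def suspicious_hosts(events, fail_threshold=3):
    """events: list of (host, event_id) tuples in chronological order.
    Flag hosts where at least fail_threshold failed logons (4625)
    precede an 'audit log was cleared' event (1102)."""
    failures = {}
    flagged = set()
    for host, event_id in events:
        if event_id == 4625:
            failures[host] = failures.get(host, 0) + 1
        elif event_id == 1102 and failures.get(host, 0) >= fail_threshold:
            flagged.add(host)
    return flagged

# Hypothetical event stream: srv01 shows the brute-force-then-cover-tracks
# pattern; srv02 has too few failures to trip the default threshold.
events = [
    ("srv01", 4625), ("srv01", 4625), ("srv01", 4625), ("srv01", 1102),
    ("srv02", 4625), ("srv02", 1102),
]
print(suspicious_hosts(events))  # {'srv01'}
```

Tuning `fail_threshold` over time is exactly the kind of refinement the checklist above is meant to drive.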

Having a clear plan for your security policy will help overcome what is arguably the biggest issue with all logging and alerting: large numbers of false positives that waste time and resources and can distract from a real threat.


Recommended Reference:

Spotting the Adversary Through Windows Event Log Monitoring

By Jamie Hynds, SolarWinds Security Product Manager


Last year, the White House issued an Executive Order designed to strengthen cybersecurity efforts within federal agencies. The EO requires agencies to adhere to the National Institute of Standards and Technology's (NIST) Framework for Improving Critical Infrastructure Cybersecurity, popularly known as "the framework."


Henceforth, the agencies are expected to follow a five-step process:


  • Identify
  • Protect
  • Detect
  • Respond
  • Recover

This creates near-term challenges with potentially long-term benefits. In the near term, agencies will need to implement the framework right away and modernize their security and IT infrastructures, without the benefit of additional funding or resources. Over the long-term, their actions will likely improve security and result in cost savings. But they must overcome some significant hurdles first.


The framework receives mixed reviews

A recent SolarWinds survey of federal IT professionals found a mixed view of the framework. While 51 percent of respondents claimed that the framework contributed to their agencies' successes, 38 percent stated that it posed a challenge to managing risk.


In addition, while 55 percent of respondents felt that the framework has succeeded in promoting a dialogue around risk management, 38 percent felt that the framework remains misunderstood.


However, we also found a pearl of wisdom that agency IT professionals, faced with the new EO, can grasp. More than 8 out of 10 respondents indicated that their agencies are at least somewhat mature in each of the framework's five-step areas, although respond and recover remain relatively weak.


That maturity appears to tie directly into the types of controls these agencies are using. When asked about the speed at which their systems can detect security threats, respondents who felt their security controls were either excellent or good indicated that they could respond to network threats more quickly than those who rated their controls as fair or poor.


Modern systems provide a solid foundation

The message is clear. Agencies with modern, robust systems and processes have set themselves up for security success. They are well on their way toward building a solid foundation upon which they can implement and follow the framework's five-step process.


What makes them so different? Let's take a look at each of the five steps and explore some of the solutions they are using to eliminate any weak links they might have in their networks.



Identify

In this first step, administrators and security managers must look at the risk landscape and ascertain where threats may materialize. They have to consider all of the various aspects that could pose threats to their networks, including devices, applications, and servers.


Organizations with robust network monitoring policies tend to do better in this area because they have more effective risk management planning support. Notably, survey respondents highlighted file integrity monitoring and SIEM tools as effective security solutions, while 46 percent stated that "tools to monitor and report risk" have contributed to successful risk management.



Protect

This is all about implementing appropriate security controls and safeguards. The idea is to contain or completely mitigate the impact of a threat event.


Our survey respondents mentioned a variety of solutions that helped improve their protection efforts. They specifically called out patch management and network configuration management tools as useful threat deterrents. Other approaches, including log and network performance monitoring, can be used to help ensure that these controls are working as expected and generate reports to prove their efficacy. This is important, as detailed reporting is another required component of the president's EO.



Detect

Detection involves identifying the occurrence of a cybersecurity event. Administrators must have the means to discover and track anomalies and suspicious network activity.


The framework itself specifically calls for continuous monitoring as a component of this step. Log management and security information and event management fall under this category. Administrators should also consider implementing traffic analysis procedures that can alert teams to irregular traffic patterns and provide deep forensics information on network activity.


Respond and recover

Let's lump these last two steps together, since 12 percent of respondents indicated that their agencies were "not at all mature" in each of these two areas. They feel their organizations are great at detection, but lack the ability to quickly respond to and recover from attacks.


In terms of response, log and security event management products have proven beneficial. Once a threat is detected, they can immediately block IPs, stop services, disable users, and more. All of these steps can effectively contain and limit potential damage.


Still, not every attack will be denied, and agencies must step up their disaster recovery efforts in the event of a successful threat. Taking days to recover from an attack, similar to the one that took place across the world earlier this year, is simply not an option. In such cases, network configuration management solutions can be used to back up device configurations for easy recovery in the event that the system goes down, greatly reducing recovery times.


Reducing complexity is key

"Easy" is not a word that has been traditionally associated with the framework or other regulations and mandates. Indeed, one survey respondent remarked that the "complexity of regulatory frameworks adds to the challenges."


Fortunately, the government recently took steps to alleviate some of the complexities associated with the framework by issuing a draft of NIST 1.1, which, according to NIST, is designed to "clarify, refine, and enhance the Cybersecurity Framework, amplifying its value and making it easier to use." The draft clarifies much of the language surrounding security measurement and provides additional security guidance, with the goal of making things simpler and clearer for those using the framework. It is a step in the right direction. Added complexity is the last thing that agencies need as they deal with rising and more sophisticated threat vectors from enterprising hackers and the mistakes made by careless insiders.


While the government works to improve the framework, administrators can continue to do their part to reduce much of the complexity and many of the challenges involved in following its step-by-step process. Implementing modern tools and policies throughout the framework's five steps creates a solid support structure for threat identification, protection, defense, response, and recovery.


Find the article on Federal News Radio

This edition of The Actuator has a handful of data privacy and security links. It would seem that as the world becomes more connected, as we all share data freely, we become a little less safe. Our trusting nature opens us up to attacks, often from places that we need to be safe (such as security patches). I'm seeing a rise in the number of articles on the topic of data security and privacy, but that may be a result of outlets knowing what stories are getting attention.


The same thing happened with shark attacks. For a while, you would have thought the sharks were massing for a hostile takeover. Then you realize that they have always been there; you just had to know where to look.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Corporate Surveillance in Everyday Life

A bit long, but worth digesting over time. The key takeaway for everyone here is this: Data you think is private isn’t. It is best to assume that everything you do is tracked. Those bits reveal your identity in ways you may not agree with or like.


Dell is considering a sale to VMware in what may be tech's biggest deal ever

Didn’t Dell just buy EMC, who owned VMware? It’s as if you bought a car, and it came with Sirius XM for two years, and before the two years were done Sirius XM bought your car back from you. Crazy.


The Ability to Fake Voice and Video is About to Change Everything

This is some deep, dark, data privacy stuff here, folks. If there ever was a need for blockchain to show actual value for humanity, now’s the time.


Ransomware Outlook: 542 Crypto-Lockers and Counting

Consider this a reminder to make three backups of your data, right now, before you are a victim.


U.S. Ports Take Baby Steps in Automation as Rest of the World Sprints

Once companies start to recognize the money they can save with automation, including AI, then we will be closer to letting the machines take over the tasks they do better, freeing humans up to handle the more human tasks.


WHATIS Going to Happen With WHOIS?

This is going to get messy and all I can think is, What Would Dalton Say? "It’ll get worse before it gets better." Then again, if I can save money on private registrations for my domains, I’m good with that.


New Mac crypto miner distributed via a MacUpdate hack

This is why we can’t have nice things. When hackers make us uncertain about applying updates, I stop and think we should just unplug everything.


I feel bad about all the data privacy and security links so here's a picture of some bacon I enjoyed recently at The Rooster Co. You're welcome:

By Paul Parker, SolarWinds Federal & National Government Chief Technologist


Modernizing legacy systems is back on the front burner. Here's an interesting article from my colleague, Joe Kim, in which he reminds us not to forget about the network.


The Modernizing Government Technology (MGT) Act was signed into law last year and there’s one bipartisan truth that everyone can agree on: Legacy technologies are keeping agencies’ IT operations from being efficient, reliable, and secure.


But while the MGT Act focuses primarily on the need to upgrade individual IT systems, agencies should start their modernization initiatives at the network level.


Many of the network technologies that government agencies have used for years no longer cut it in today’s cloud-driven world. Many of these systems are not secure, and are unsuitable for current and future network demands.


Let’s take a look at some ways legacy IT networks are holding us back, and how modern network architectures can help propel us forward.


Modern security for today’s challenges

Outdated networks must be modernized for better efficiency and to detect, defend against, and evolve with today’s cyber threats. Managers must explore creating modern, automated, software-defined networks. These automated platforms will give managers the ability to detect and alert on potential security risks. An automated network can also serve as a flexible infrastructure designed to adapt to future needs.


Greater flexibility for now and the future

Unlike legacy IT networks, modern, software-defined network architectures are built on open standards; thus, they are more flexible and can easily scale depending on the needs and demands of the agency.


Today’s networks cannot be rigid. Managers must be able to automatically scale them up or down as needed. Networks must be adaptable enough to accommodate usage spikes and changing bandwidth requirements, so bottlenecks are eliminated and five nines of availability is maintained.


Open, software-defined network architectures allow for this, while also enabling managers to deploy different solutions to suit their needs and budgets. Agencies may be using a combination of services and solutions, and it’s not uncommon for agencies to use a hybrid IT infrastructure that includes technologies from different vendors.


Insight into hybrid infrastructures

Hybrid IT networks are becoming more commonplace in the public sector. In fact, a recent SolarWinds report indicates that a majority of government agencies surveyed are moving to the hybrid model.


Managers must investigate and consider deploying solutions that can provide insight into the network operations and applications wherever they may reside, both on- and off-premises. They need to be able to see through the blind spots that exist where their data moves between their hosting providers and on-site infrastructures. Having this complete and unimpeded perspective is critical to maintaining reliable, fast, and well-protected networks.


These networks are the beating hearts of federal IT, and while there’s little doubt that individual hardware components must be updated, we cannot forget about the infrastructure that drives all of these pieces. While the MGT Act is a step in the right direction, none of us should lose sight of the fact that it is network modernization that will ultimately drive better security, reliability, and efficiency.


Find the full article on Federal News Radio.


Extending IT Experience

Posted by kpe Jan 31, 2018

Working in IT? Feeling lost given everything that’s going on in the industry? Are you confused about what to focus on and how to best apply your knowledge?


If you can relate to any of the above, this blog post is for you!


These days, the lines between job functions are growing increasingly blurrier. New topics and technologies are constantly evolving, so how do you stay ahead, or at least on top of your game?


First, understand that staying ahead in the industry is a never-ending journey, which means you will never be “done.” You will reach some career milestones, but there is no end to reach.


Second, pick one area of interest in which you would like to improve. For me, that’s network automation, or learning to develop software to assist with daily tasks. Don’t start with a huge topic; keep it manageable. For example, I would not start out with “data center” and try and understand everything under that huge umbrella. Instead, you could start out with: “Data center to WAN connectivity,” which narrows it down quite effectively. The former would take several months and a reading list of 5-10 books, whereas the latter boils it down to maybe three to four weeks and a reading list of one or two books.


After you have picked an area, be very single-minded in your focus. Don’t jump back and forth between different tracks that you would like to improve on. This will only make you feel discouraged because you’ll likely feel you aren’t making any progress. This discipline has helped me to focus and improve certain skills much faster than if I would have done a bit of this and a bit of that randomly.


This approach also has the side effect of helping to sort out your mind, so to speak. You will feel more in control, which will help reduce the nagging sensation you get when you are feeling lost.


Through my experience in IT, there are two truly effective ways of choosing a topic to study. First, pick a topic that is related to something that you already do on a regular basis. The advantage of this is that you will actually get to use your improved skill for something practical. At some point, it might even free up some time to start on my next suggestion.


Once you feel you’ve improved enough in one area of interest, pick a topic that you would like to extend yourself into. This will keep your skill set sharp so that you will be prepared for the next big thing coming on the horizon. It also has the added benefit of being something you are really passionate about, which will make it easier to fully focus on it.


Now that you have picked out something to improve on, how do you actually go about it? Well, as mentioned earlier, single-tasking can be quite helpful in IT. But before you get to that, I would suggest you map out some time slots during the day that are specifically dedicated to study. If you don’t have time during your normal work day, try and do it before you leave for work in the morning.


If you have issues with procrastination, try and start really small. Try reading for just 5-10 minutes. It might not be much, but it’s better for your career than spending that time sleeping.


Keep a journal or a list of your progress. I note what I have studied and for how long in my calendar. This has the benefit of providing me with more motivation when I look back and see how I’ve performed during the week.


Also, try and mix things up if you get stuck doing only one thing (reading, for example). Shake it up by watching some videos on the topic instead.


Finally, to gain IT experience, I would advise you to read popular blogs, news sites, Twitter®, etc. Just dive into the ones you find relevant to your chosen topic. This will help create a mental picture of what’s going on in the industry so that you’ll have them fresh in your mind. You have to be very careful not to take in too much information that is irrelevant to your current topic. By all means, be curious about topics that relate, but be mindful of your mental bandwidth.


I hope this information will give you a sense of purpose and rejuvenation in your professional life. I know it’s a system that has worked for me, so I trust others will be able to use it as well.


If you have any questions, feel free to reach out and I will do my best to help!


Thanks for reading!

For a lot of organizations, moving to Office 365 might be one of the first, or possibly biggest, migrations to a public cloud. There can be a lot of stress and work that goes into the process, with the hope that the organization will reap the benefits. But how can you be sure that you are, in fact, getting the most out of Office 365?




We typically don't carry out IT projects just for the heck of it. Enterprise IT departments almost always have a backlog to deal with, so deploying new technology just for the sake of it isn't very high on the list. Rather, most priorities are ordered by the problems that they solve or the value that they bring. Moving to Office 365 is no different. If you find yourself looking at making the move, hopefully, you have a list of perceived value it will bring.


Sitting down with business leaders is a great starting point. Ask them what--if any--pain points they have. Maybe it is the lack of availability. Do they always need to be using a corporate-issued laptop to access email or work documents? Would SharePoint online solve that? Another common complaint that I have seen is older software. Sure, 90% of the features from Word 2003 are the same in Word 2016, but that doesn't mean everyone wants to use 13-year-old technology. In some cases, it can even be for perception. If a salesperson shows up to close a big deal and they are running Office 2003, how would that look? They certainly would not come off as a cutting-edge company. Subscription-based licenses from Office 365 can solve this and ease the burden of managing spreadsheets full of license info for IT departments.




You've decided that the move makes sense. Great! What challenges do you foresee? This step is critical, as there is almost always some cost associated with it. It might be soft costs, such as time from your salaried IT department. Or it might be hard costs; maybe you are looking at performing a hybrid installation and will see increased bandwidth costs.


How about regulations? Do you need to make sure some data stays on-premises (i.e. financial data) or is it all safe to move to SharePoint? If the former, how do you plan to track and enforce compliance? There are tools built into Office 365 for compliance and security, but will it be a challenge to get IT staff trained on them?


Another common challenge is user training. Lots of options exist for this, ranging from Microsoft-provided materials to possibly doing lunch and learn sessions with groups of employees over time. As most folks who have help desk experience in IT know, sometimes a small number of users can account for the majority of support time.




Now that you know what value you will be gaining, and the potential challenges, you need to do some math. Ideally, you can account for everything (monthly costs, infrastructure upgrades, lost productivity, etc.). Even better if you can assign dollar figures to it. Once you have that, the decision should become easier. Are you saving money? If so, how long will it take to reach your ROI? Are you going to end up spending more on a monthly basis now? Is the value worth it? Maybe your sales staff will be more agile and close more deals, increasing revenue and profit.
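
One simple way to frame that math is a payback-period calculation. The sketch below uses entirely made-up figures; plug in your own estimates:

```python
def payback_months(one_time_cost, monthly_cost_delta, monthly_value):
    """Months until cumulative value covers the migration cost.
    monthly_cost_delta: new recurring cost minus old recurring cost.
    monthly_value: estimated monthly benefit (saved labor, extra revenue)."""
    net_monthly = monthly_value - monthly_cost_delta
    if net_monthly <= 0:
        return None  # at these numbers, the move never pays back
    return round(one_time_cost / net_monthly, 1)

# Hypothetical figures: $40k migration effort, $1,500/month more in
# subscription fees, $4,000/month in saved admin time and productivity.
print(payback_months(40_000, 1_500, 4_000))  # 16.0 months to break even
```

Even a rough model like this makes the "is the value worth it?" conversation concrete instead of a gut call.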


This is by no means a comprehensive list for such a big project, but it should be a good starting point. Do you have any tips to share? Maybe you've run into some unexpected issues along the way and can share your experiences. Feel free to leave comments below!


This past Sunday was Data Privacy Day. I'm guessing most of you didn't know such a day existed. It would seem that the first rule of Data Privacy Day is we don't talk about Data Privacy Day. Here's hoping we can get those rules changed before next year.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Hating Gerrymandering Is Easy. Fixing It Is Harder.

Because I enjoy when data is used to help explain the problem and to explain why there is no perfect solution.


Thieves Jackpot ATMs With ‘Black Box’ Attack

ATMs aren’t known for being up to date. I’m surprised that this exploit isn’t more widespread.


Windows 10 can now show you all the data it’s sending back to Microsoft

Just in time for Data Privacy Day, Microsoft is letting users know details about the telemetry they collect.


Tracking app Strava reveals highly sensitive information about U.S. soldiers’ location

Also just in time for Data Privacy Day, Strava shows that neither they nor their users understand the dangers involved with something as innocent as tracking a jogging route.


Cybersecurity should be a boardroom topic, so why isn’t it?

Money, mostly.


Personal Data Representatives: An Idea

I think if the author mentioned the word “blockchain” in this article he would already have funding for this wonderful idea that I do hope happens in my lifetime.


Burger King Trolled Customers to Perfectly Explain Net Neutrality

In case you haven’t seen this video yet, enjoy.


This is how I celebrate Data Privacy Day, by drilling holes into my old hard drives:

By Paul Parker, SolarWinds Federal & National Government Chief Technologist


Security is always an important topic with our government customers. Here's an applicable article from my colleague, Joe Kim, in which he offers some tips on compliance.


Ensuring that an agency complies with all of the various standards can be a job in itself. The best strategy is to attack the challenge on three fronts. First, proactively and continuously monitor and assess network configurations to help ensure that they remain in compliance with government standards. Second, be able to report on compliance status at any given time. And third, beef up the network with rock-solid security and be prepared to quickly remediate potential issues as they arise.


Automate network configurations


One of the things agencies should do to remain in compliance with the RMF, DISA STIGs, and FISMA is monitor and manage their network configuration status. Automating network configuration management processes can make it much easier to comply with key government mandates. Device configurations should be backed up and restored automatically, and alerts should be set up to advise administrators whenever an unauthorized change occurs.
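
One way to sketch that unauthorized-change alerting is to compare a fingerprint of the running configuration against the last approved backup. This is a minimal illustration, not a substitute for a configuration management tool, and the configs shown are hypothetical:

```python
import hashlib

def fingerprint(config_text):
    """Stable fingerprint of a device configuration."""
    return hashlib.sha256(config_text.encode()).hexdigest()

def changed_since_backup(current, backup):
    """True if the running config no longer matches the approved backup."""
    return fingerprint(current) != fingerprint(backup)

# Hypothetical approved backup versus the config currently running.
approved = "hostname rtr1\nlogging host 10.0.0.5\n"
running  = "hostname rtr1\nlogging host 10.9.9.9\n"

if changed_since_backup(running, approved):
    print("ALERT: unauthorized configuration change on rtr1")
```

A scheduled job running a check like this per device gives you the "alert whenever an unauthorized change occurs" behavior described above.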


Be on top of reporting


Maintaining compliance involves a great deal of tracking and reporting. For example, one of the steps in the RMF focuses on monitoring the security state of the system and continually tracking changes that may impact security controls. Likewise, FISMA calls for extensive documentation and reporting at regular intervals, along with occasional onsite audits. Thus, it is important that agencies have easily consumable and verifiable information at the ready.


The reporting process should incorporate industry standards that document virtually every phase of network management that could impact an agency’s good standing. These reports should include details on configuration changes, policy compliance, security, and more. They should be easily readable, shareable, and exportable, and include all relevant details to show that an agency remains in compliance with government standards.


Catch suspicious activity and automate patches


Agency IT administrators should also incorporate security information and event management (SIEM) to strengthen their security postures. Like a watchdog, a SIEM watches for suspicious activity and alerts when a potentially malicious threat is detected. The system can automatically respond to the threat in an appropriate manner, whether by blocking an IP address or a specific user, or by stopping services. Remediation can be instantaneous and performed in real time, thereby inhibiting potential hazards before they can inflict damage.
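
That automated-response step can be modeled as a dispatch table mapping a detected threat category to a containment action. The categories and stub actions below are illustrative placeholders, not any particular SIEM's API:

```python
def block_ip(detail):
    return f"blocked ip {detail}"

def disable_user(detail):
    return f"disabled user {detail}"

def stop_service(detail):
    return f"stopped service {detail}"

# Map detected threat categories to containment actions.
RESPONSES = {
    "brute_force": block_ip,
    "compromised_account": disable_user,
    "rogue_service": stop_service,
}

def respond(threat_type, detail):
    """Run the configured containment action, or escalate if none exists."""
    action = RESPONSES.get(threat_type)
    if action is None:
        return f"no automated response for {threat_type}; escalate to analyst"
    return action(detail)

print(respond("brute_force", "203.0.113.7"))   # blocked ip 203.0.113.7
print(respond("unknown_anomaly", "host42"))    # falls through to escalation
```

Keeping the mapping explicit makes it easy to review which responses are fully automated and which still require a human in the loop.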


Implementing automated patch management is another great way to make sure that network technologies remain available, safe, and up to date. Agencies must stay on top of their patch management to combat threats and help maintain compliance. The best way to do this is to manage patches from a centralized dashboard that shows potential vulnerabilities and allows fixes to be quickly applied across the network.


Following the guidelines set forth by DISA®, NIST®, and other government acronyms can be a tricky and complicated process, but it does not have to be that way. By implementing and adhering to these recommended procedures, government IT professionals can wade through the alphabet soup while staying within these guidelines and upping their security game.


Find the full article on our partner DLT’s blog Technically Speaking.

Let's establish what GOAT stands for. In this instance, it stands for the greatest of all time. The GOAT Doctrine states that IT professionals, regardless of experience and expertise, should always be moving forward. Such a concise, vague, and all-encompassing statement, yet so profound in its meaning for a true professional.


The only constant in IT and technology is change. That change sometimes seems circular as old technology morphs into new technology that transforms back into a reincarnation of old technology. Other times, this change is reflected as people flow in and out of our professional purview. This change flows in with more velocity, volume, and variety. What can you do to survive and thrive in your IT career?


Simple. Aim to be the GOAT you. GOATs continue to adapt, evolve, and most importantly, move forward. They move forward to gain new experience and expertise while leveraging the best of what they already have. To be a GOAT, you have to beat the GOAT. That GOAT is the past you. So take up that challenge in this New Year and become the better GOAT version of yourself.


In the comment section below, share your GOAT stories of how you power-leveled yourself and became a better GOAT you.

Flood of Kobolds vs. Epic Boss Fight

I had a group where I was running a dragon-themed adventure.  Before you gasp and say mockingly, “In Dungeons & Dragons, there are actually dragons?” let me tell you that that was just thematic (as far as my players knew).


Part of this involved whipping the kobolds into a frenzy because they were acting on the orders of their dragon overlord.  For those unaware, kobolds are the cannon fodder of the D&D world.  The party was repeatedly beleaguered by hordes of them in staged battles (archers, then infantry, then mages).  Long story short, each battle felt like a big fight with the difficulty ramping up each time.  Unfortunately for my players, these were just the opening volleys.

The big battle was against a young dragon.  Don’t let the word “young” distract you from the word “dragon.”  This wasn’t an easy fight.  Sadly, it was nearly a total party kill (TPK) for the group because they got overconfident based on their previous encounters.  Confidence is great – overconfidence less so.

I’ve seen the same thing in IT.  Technicians get cocky, thinking they know everything, and bite off more than they can chew.  This is typically something that green IT people do, but they don’t have exclusive rights to this failing.  The thought that “in a perfect world it will go just like this” falls on its face when you realize that there’s no such thing as a perfect world.  New IT people sometimes don’t have the good sense to think, “maybe this isn’t just one degree harder than the previous thing I did.”  If there’s a lesson to be learned here, it’s that you cannot win every fight with the same skills you already have.  Learn new ones and evaluate your scenarios before rushing in.


We get to the final question here, “Has running a D&D game actually provided me with any life skills?”  The answer is a resounding “Yes!”  So much so, that I would have no issue putting a few things on my professional resume.

  • Met with peers for scheduled creativity and conflict-resolution exercises
  • Assisted multiple people with gathering experience for both character and skill development
  • Learned to quickly assess situations and collaborate on the best solutions

Can you have too much of a good thing? Maybe not, but you can certainly have too much of the wrong thing. In my first blog, I introduced the idea that Microsoft event logging from workstations can be a simple first step to building a security policy that looks beyond the perimeter. The simplicity comes from the fact that event logging is part of a workstation's OS, so there's no need to acquire additional applications or agents.


As many of you commented, the more difficult part is filtering out the stuff you don’t need. Fortunately, this is also the fun part.


By creating focused filters and sets of important events, the admin is required to understand what different event IDs mean and how they relate to the overall security of the organization. Being proactive and reactive to threats requires knowledge and at times creativity. You also need to monitor and tune your workstation event log policies to ensure they are providing the appropriate level of coverage whilst not overwhelming a system from a performance standpoint.


So what are the must-haves when it comes to event logs?  Here are some of the critical ones for a workstation (non-server) policy.


  • NEW PROCESS STARTING: 4688 - logs when a process or executable starts.
  • USER LOGON SUCCESS: 4624 - tracks successful user logons to the system.
  • LOGON FAILURES: 4625 - can be used to detect possible brute-force attacks.
  • SHARE ACCESSED: 5140 - helps track user access to file shares.
  • NEW SERVICE INSTALLED: 7045 - captures when a new service is installed.
  • SERVICE STATE CHANGES: 7040 - can indicate a service has been modified by an attacker.
  • NETWORK CONNECTION MADE: 5156 - works with Windows Firewall to log the state of a network connection (source, destination, ports, and process used).
  • FILE AUDITING: 4663 - logs an event when a file is added, modified, or deleted.
  • REGISTRY AUDITING: 4657 - captures when a registry item is added, modified, or deleted.
  • WINDOWS POWERSHELL COMMAND LINE EXECUTION: 500 - captures when PowerShell is executed and logs the command line.
  • WINDOWS FIREWALL CHANGES: 2004 and 2005 - capture when firewall rules are added or modified.
  • SCHEDULED TASKS ADDED: 106 - captures when a new scheduled task is added.
  • USER RIGHT ASSIGNMENT: 4704 - documents a change to user right assignments on the computer and will show privilege escalations.
  • USER ACCOUNT CREATED: 4720 - tracks accounts created on the local machine.
  • USER ACCOUNT DELETED: 4726 - alerts if a local account is removed.
  • USER ACCOUNT LOCKOUT: 4740 - reports accounts locked due to failed password attempts.
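To make the policy concrete, here is a small Python sketch that filters exported event records down to a critical-ID set like the one above. It assumes the logs have already been exported as dictionaries with an EventID field; that record format is an illustrative assumption, not a fixed standard:

```python
# A workstation policy set of critical Windows event IDs
# (e.g., 4624/4625 logons, 5140 share access, 7045 new service).
CRITICAL_EVENT_IDS = {
    4688, 4624, 4625, 5140, 7045, 7040, 5156, 4663, 4657,
    500, 2004, 2005, 106, 4704, 4720, 4726, 4740,
}

def filter_critical(records):
    """Keep only records whose EventID is in the workstation policy set."""
    return [r for r in records if r.get("EventID") in CRITICAL_EVENT_IDS]

sample = [
    {"EventID": 4624, "host": "wk01"},   # successful logon -> keep
    {"EventID": 1000, "host": "wk01"},   # generic application error -> drop
]
kept = filter_critical(sample)           # only the 4624 record survives
```

In practice, this kind of filtering happens in your log collector or SIEM, but the underlying idea is the same: a curated set of IDs separates the signal from the noise.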


The threat landscape is always changing. Attacks have a lifecycle, and in the end they are either remediated by our antivirus and anti-malware products, or they morph into the next big thing.

Once you have an event policy in place, it’s important to monitor the latest security issues in order to assess the applicability of your policy.


A few points to consider are:

  • Does your event set still help detect and remediate attacks?
  • Has there been a change to endpoint configuration (a new agent for example) that may be creating duplicate event logs?
  • As the OS changes over time, are new events available that should be introduced?
  • Should I tune my parameters or deprecate event IDs that are no longer relevant to avoid taking up space?
  • Are there compliance mandates that dictate a change in policy?


The moral of the story is, when it comes to security, you are never really done.


In my next post, we’ll take a look at building an overall strategy for creating filters and deploying your event log policies. We'll also identify some events that are worth tracking relating to specific applications and services.


One great reference for Windows Event IDs:

Short-Circuiting your Adventure

I’ve noted that preparation and planning are some of the hallmarks of both a good DM and IT professional, but sometimes planning gets short-circuited.  In the past, I’ve spent hours working on a campaign with intricate details, a slowly building storyline, interesting character interactions, and a clear path from point A to B to C and so on.

Then we sit down to play this epic tale, and the players choose to follow the white rabbit instead of the obvious path in front of them and jump immediately to point J.  That’s it.  A small change and they’ve completely gone off course.  In the past, I’ve had this go one of two ways: the group takes a petty diversion and makes it more than it’s supposed to be or they’ve jumped ahead in the story so much that I don’t have anything else planned.  As the DM, you can either throw up your hands and walk away, force them back on the “right” path, or see where the adventure leads.

One of those avenues reminds me of scope creep in IT projects.  You’ve outlined everything in a beautiful waterfall plan, and the team starts taking side trips and tacking on new requirements.  The adventure of discovery is upon you!  Keep going!  Let’s see where this will end up.  Will planning to only swap out a power supply lead to the adventure of replacing all the UPSes in a data center?  Who knows?  Mystery abounds.

The clear answer here is “within reason.”  You’ve planned one change and now people are adding more and more to that change request.  Where do you draw the line?  I can’t answer that for you, because it’s different for every scenario, but you should be open to change.  Embrace it where you can and push it off where you cannot.

The flip side of this scope creep is having your entire plan thrown out and being asked to “skip to the end.”  Just like in D&D, this leaves you asking, “what’s next?” and not having a clue.  Maybe you’ve only planned for five steps and you need to jump right to step six.  Can you plan for this?  Probably not.  About the only thing you can do is plan for the unexpected.

Looking back, the same solution presents itself for each problem: plan for the unexpected.  This applies to the players, the DM, and the IT Professional.  There’s always going to be another tree in the forest, and there might be a goblin archer behind one.  Plan for these diversions, but don’t let them pull you from your goal.

Business services and infrastructure services have divergent interests and requirements: business services do not focus on IT. They may leverage IT, but their role is to be a core enabler for the organization to execute on its business strategy, i.e., delivering tangible business outcomes to internal or external customers that help the organization move forward. One business service could focus on the timely analysis and delivery of market data within an organization to drive its strategy; another could allow external customers to make online purchases.


Infrastructure services will instead focus on providing and managing a stable and resilient infrastructure platform to run workloads. It will not necessarily matter to the organization whether these are running on-premises or off-premises. What the organization leadership expects from infrastructure services (i.e. IT) is to ensure business services can leverage the infrastructure to execute whatever is needed without any performance or stability impact.

Considering that the audience is very familiar with infrastructure services, we will focus the discussion here on what business services are and what makes them so sensitive to any IT outages or performance degradation.


Business services, while seemingly independent, are very often interconnected with other organization IT systems, and sometimes even with third-party interfaces.  A business service can thus be seen (from an IT perspective and at an abstract level) as a collection of systems expecting inputs from either humans or other sources of information, performing processing activities and delivering outputs (once again either to humans or to other sources of information).


One of the challenges with business services lies in the partitioning of their software components: not everybody may know the “big picture” of what components are required to make the entire service/process work. Within the business service, there will usually be a handful of individuals who’ve been around long enough to know the big picture, but this may not always be properly documented. Failing to properly document the upstream and downstream dependencies of an entire business service, whether through impossibility, inability, or simple lack of awareness, is often the culprit behind extended downtimes and laborious investigation and recovery activities.


In the author’s view, one of the ways to properly map the dependencies of a given business service is to perform a Business Impact Analysis (BIA) exercise. The BIA is interesting because it examines the business service from a business perspective: what is the financial and reputational impact, how much money would be lost, what happens to employees, and will the organization be fined or, even worse, have its business license revoked?


Beyond these questions, it also delves into identifying all of the dependencies required to make the business service run. These might be the availability of infrastructure services and qualified service personnel, but also the availability of upstream sources such as data streams that are necessary for the business service to execute its processes. Finally, the BIA also looks at the broader picture. If a location is lost because of a major disaster, perhaps it no longer makes sense to “care” about a given business service or process when priorities have shifted elsewhere.


Depending on the size of the organization, its business focus and the variety of business services it delivers, the ability to map dependencies will greatly vary. Smaller organizations that operate in a single industry vertical might have a simplified business services structure and thus a simpler underlying services map, coupled with easier processes. Larger organizations, and especially regulated ones (think of the financial or pharmaceutical sectors, for example), will have much more complex structures which impact business services.


Keeping in mind the focus is on business services in the context of upstream/downstream dependencies, complexities can be induced by the following:

  • organizational structure (local sites vs. headquarters)
  • regulatory requirements (business processes must account for the need to provide outputs to a regulatory body)
  • environmental requirements - production processes depend on external factors (temperature/humidity, quality grade of raw materials, etc.)
  • availability of upstream data sources and dependency on other processes (inability to invest if market data is not available, inability to manufacture drugs if environmental info is missing, inability to reconcile transaction settlements, etc.)


In these complex environments, the cause of a disruption to a business service may not be immediately evident, so adequate service mapping will help, especially in the context of a BIA. Needless to say, it may not always be a walk in the park to get this done, especially if the key members of the organization who were the only ones to understand the full context are gone. It can be much worse in the case of a disaster or an unfortunate life incident (the author has experienced this in at least two organizations).


What about IT / infrastructure services, and how can they help with the challenges of business services? It would be wrong to assume that IT is the panacea for all problems and the all-seeing eye of an organization. There is, however, a tendency to assume that because business services execute on top of infrastructure services, IT has an all-encompassing view of which application servers are interacting with which databases, and this leads organizations to believe that only IT can fully map a business service.


The belief holds partially true: IT organizations that leverage advanced monitoring solutions are able to map a majority of infrastructure/application dependencies and view traffic flows between systems. In our view, these solutions should always be leveraged because they drastically improve the MTTR (Mean Time to Resolution) of an incident. Nevertheless, in the context of a BIA and of the business view of services, we believe that while IT should definitely be a contributor to business service mapping, it should not be the owner of such plans. The full view of business services requires the organization not only to incorporate IT’s inputs, but also to gather the entire process flow for any given business process and to understand which inputs are required and which outputs are provided, as those may not always end in a handshake with an IT infrastructure service process.
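At its core, the upstream/downstream mapping described here is a directed graph problem. As a rough sketch (the service names are invented for illustration), a BIA-style walk of a dependency map might look like this in Python:

```python
from collections import defaultdict, deque

def upstream_dependencies(edges, service):
    """Walk the dependency graph to find everything a business service
    relies on, directly or transitively.

    edges: iterable of (dependency, dependent) pairs; for example
    ("market-data-feed", "trade-analytics") means trade-analytics
    depends on market-data-feed.
    """
    requires = defaultdict(set)          # dependent -> direct dependencies
    for dep, dependent in edges:
        requires[dependent].add(dep)
    seen, queue = set(), deque([service])
    while queue:                         # breadth-first walk upstream
        for dep in requires[queue.popleft()]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# Hypothetical map for an online-purchases business service.
edges = [
    ("oracle-db", "order-api"),
    ("order-api", "online-purchases"),
    ("payment-gateway", "online-purchases"),
]
deps = upstream_dependencies(edges, "online-purchases")
```

The same graph, walked in the opposite direction, answers the incident-response question: which business services are impacted when a given infrastructure component fails.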

Had a great time in Austin last week despite the ice storm. It's been a while since I've had to scrape ice from a windshield with my bare hands. I'm glad that my New England survival training skills are still intact. 


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Google Memory Loss

Just in case you were worried about the loss of net neutrality leading to censorship, here's a little piece to remind you that censorship comes in many forms. Google doesn't care about helping you find data; they only care about getting the right ads delivered to your browser.


Google Cloud is Adding 5 New Data Centers, Rolling Out 3 New Subsea Cables

Am I the only one who read the title and thought "Google has a cloud"? OK, I *know* they have a cloud, and I like seeing that they are investing in expanding their reach. But this article makes it clear that Google is far behind AWS and Azure, and it is not likely we are going to see anyone else make the necessary investment to compete with those three (no, not even Oracle).


Internal documents reveal that Whole Foods is leaving some shelves empty on purpose

I guess this is what happens when someone that runs a bookstore thinks they can run a grocery store, too.


Jason's Deli: Hackers Dine Out on 2 Million Payment Cards

Because it's been a while since I reminded you that everything is awful, and that corned beef and swiss cost you more than you thought.


Intel Confirms Fresh Spectre, Meltdown Patch Problems

Everything Is Awful™


Throne AI is the Kaggle of Sports Predictions

OK, maybe it's just me, but I think this idea is fantastic. It reminds me of how my interest in Fantasy Football 20+ years ago paved the way for me to take a deep interest in data, databases, web development, and applications. The gamification of predictive models is brilliant, because right now there is someone out there, starting out with machine learning, and will use this as a starting point for their career. And I think that's awesome.


Social media is making you miserable. Here's how to delete your accounts.

Here's a list to help remove yourself from the major social media websites. In case, you know, you've thought about doing that once or twice already.


Dear Austin Hotel: I appreciate the effort with the sand last week, but this isn't how you spread sand, and beach sand isn't as helpful as you might think.

