
Geek Speak


By Paul Parker, SolarWinds Federal & National Government Chief Technologist


According to the recent IT Trends Report from SolarWinds, 12% of surveyed public sector employees already rank blockchain among the top five most important technologies in their IT strategy. The U.K. government is heavily encouraging blockchain-based technologies, with a £19 million investment in innovative product and service delivery projects.


The promise of blockchain lies in how it can help accelerate verification processes by using many connected computers to store blocks of information. Blockchain is transparent by design, allowing data to be shared more easily between buyers and sellers.


Blockchain has the potential to revolutionize the way government agencies acquire services and solutions, but, as the financial world has discovered, network monitoring and management strategies play a critical role in blockchain’s success within public sector organizations.


Distributed network monitoring and visibility


The success of blockchain in procurement is dependent on a high throughput of transactions and low latency. Unfortunately, those goals can be difficult to achieve over a disparate network. In addition, according to the SolarWinds IT Trends Report, 58% of public sector IT professionals surveyed felt their network was not working at optimum levels.


On-prem and hybrid network infrastructures are highly distributed. Teams need to be able to monitor data as it passes between all of these services to help ensure that their networks are operating efficiently and dependably. The best way to get this insight is through monitoring strategies designed to provide access and visibility into the entirety of the network, wherever it may exist.


Resilient, but not impervious


Blockchain technology has been suggested as potentially more secure than alternatives, if used correctly. This is due to its decentralized nature, which can make it a harder target for hackers to hit. Agencies must still make sure that they are maintaining the same high standard of security practices they would apply otherwise.


It is also important to remember that blockchain is a relatively new technology. As such, there may be vulnerabilities that have not yet been exposed. At this very moment, it is likely that many hackers are attempting to identify and exploit blockchain vulnerabilities. Maintaining a sound security position can help agencies fortify themselves against those efforts while taking strides to improve their procurement processes.


Innovation beyond the procurement process


Blockchain has considerable potential for the public sector in the U.K. It has been shown to be innovative and powerful in other industries and could very possibly revolutionize government procurement processes in the near future. However, this is only the start of the potential blockchain revolution. The same technology could work to track government loans and spending, protect critical infrastructure, or even help to deliver on the government’s foreign aid commitments in a more secure and transparent way.


Success with blockchain, though, is contingent on supporting the technology with comprehensive network management. Clear visibility across all nodes and management of performance levels will be integral to helping maintain security and preventing blockages in the network. Only then can blockchain and distributed ledger technology successfully transform government digital services.


Find the full article on Open Access Government.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

There is a traditional market research technique called voice of the customer (VoC). Many people are familiar with this process, which involves surveys, interviews, and even watching customers interact with your products. Some companies use customer advisory boards (CAB) to collect feedback, too.


There’s no shortage of ways to get feedback from customers. The #hardtruth is that many of the ways are cold and unfeeling. It can take years for feedback to work its way into a product. I find that many companies talk about valuing customer feedback but fall short of connecting with their customers in a meaningful way.


When I joined Confio in 2010, my perception of the VoC process changed. It was there that I understood how it should work. Shortly after coming on board, I provided feedback regarding the monitoring of databases running inside of VMware. Have a look at this screen:



That stacked view, showing metrics inside the engine, the guest, the host, and storage? That’s what I told our dev team a DBA needed to see. Seven years later, that view still stands out when giving demos. The annotations were also something I requested. I wanted DBAs to know if there was an event on the host for that VM.


I still remember the feeling I had when I saw my feedback was used to make our product better. Imagine if a customer had the opportunity to do the same.


Well, imagine no more! The SolarWinds Usabili-buddy program is your opportunity! Go here to sign up for the program.


Usabili-buddy Example

Here’s an example of how the program has had a positive impact for everyone.


While working on the upgrade to alerting, the UX team gathered feedback from customers. Although there was already a roadmap, along with screens showing how the alerting UI would look, these customers noticed a gap in the feature. They recalled past experiences in which they had misconfigured an alert and accidentally triggered an alert storm.


These customers wanted a way to avoid spamming end users due to a misconfigured alert. As a group, they came up with the feature below on their own.



I know you can’t see that, so here is a closer look:



This was NOT on the original roadmap, but product management loved this idea. It was included in the very next release. 


Other companies talk about listening to their customers. We don’t just talk the talk. You can see the impact that the Usabili-buddy program and UX team have had over the years.


At SolarWinds, we’re listening. We know that we’re all in this together.


We don’t treat customers as revenue. We build relationships with them.


When you become a customer, you are a member of a community, not just a number in Salesforce.


Help Us, Help You, Help Them


We have 55 products now.


Managing many products is a challenging task. But we are fortunate that our user community makes it easier. The quality of customers we have allows for better feedback, and better feedback leads to better products. Thank you for giving us your time, helping to make the products better for the next user.


But they aren’t our products. They are yours. We just maintain the source code.


Usability improvements are part of every release, and the UX team meets with product development on a daily basis. In each product cycle, UX plays an important role. It's a cycle of continuous improvement, and a project that is never done.


At the end of the day we all want the same thing: happiness. We want happy customers, enjoying long weekends, without worry.


If you have an idea, or just want to help, join the program and share.


Help us, help you, help them.

At what point does the frequency and volume of “it will only take a second to change” become too much to bear and force us to adopt a network automation strategy? Where is the greatest resistance to change? Is it in the technical investment required, or is it the habit of falling back to the old way of doing things because it's "easier" than the new way?


The Little Things


We all have those little tasks that we can accomplish in a heartbeat. They're the things we've done so many times that the commands required have almost become muscle memory, taking little to no thought to enter. They're the easy part of our jobs, right? Perhaps, but they can also be the most time consuming. There's a reason those commands have become so ingrained. We perform them far more than we should, but haven't necessarily figured that out yet... well, not until now anyway.


The solution? Network automation! Let's get all of those mind-numbingly simple day-to-day tasks taken care of by an automation framework so that we can free ourselves up for work that's actually challenging and rewarding. It's that easy! Or is it?
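To make that first rung of the automation ladder concrete, here is a minimal, hypothetical sketch (the interface names and VLAN IDs are invented) of rendering a repetitive change from a template instead of typing the same commands by hand each time:

```python
# Hypothetical example: render the same "quick" VLAN change for many ports
# from one data structure, instead of typing it by hand every time.
VLAN_TEMPLATE = """interface {interface}
 switchport mode access
 switchport access vlan {vlan}
"""

def render_changes(changes):
    """Turn a list of {interface, vlan} dicts into CLI config snippets."""
    return [VLAN_TEMPLATE.format(**c) for c in changes]

snippets = render_changes([
    {"interface": "GigabitEthernet1/0/1", "vlan": 10},
    {"interface": "GigabitEthernet1/0/2", "vlan": 20},
])
print(snippets[0])
```

From here, a real framework would add the hard parts this post goes on to describe: pushing the snippets to devices, validating the result, and documenting the change.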


The Huge Amount of Work Required to Avoid a Huge Amount of Work


Automation, even with the best of tools, is a lot of work. That process itself is something that we will wish could be automated before we're done. There's the needs analysis; the evaluation and selection of an automation framework; training of staff to use it; and the building, documentation, and maintenance of the policies themselves.


When a significant portion of the drive for automation comes from overload, the additional technical workload of building an automation framework in parallel to the current way of doing things can be daunting. Yes, it's definitely a case of working smarter rather than harder, but that's still hard to swallow when we're buried in the middle of both.


The Cultural Shift


People are creatures of habit, especially when those habits are deeply ingrained. When all of those little manual network changes have reached the point that they can be done without real thought, we can be absolutely sure that they're deeply seated and aren't going to be easy to give up. The actual technical work to transition to network automation was only half of the challenge. Now we have to deal with changing people's thinking on the matter.


Here's a place where we really shouldn't serve two masters. If there isn't full commitment to the new process, the investment in it yields diminishing returns. The automation framework can make network operations far more efficient, but not if everyone is resistant to it and is continuing to do things the old way. There needs to either be incentive to adopt the new framework 100% or discouragement from falling into habitual behaviour. This could even represent a longer process than the technical side of things.


What Must Be Done


Neither the technical hurdles nor the human ones remove the ultimate need to automate. The long-term consequences of repeatedly wasting time on simple tasks, both to individuals' technical skills and job satisfaction and to the efficiency of the organization, make a traditional approach to networking unsustainable. This is especially true at any kind of scale. Growth and additional workload only serve to make the problem more apparent and the solution more difficult to implement. Still, there's no question that it needs to be done. The real questions revolve around how best to handle the transition.


The Whisper in the Wires


It's difficult to say what's harder: the technical transition to network automation itself, or ensuring that it becomes the new normal. By the time we reach a point where it becomes necessary, we may have painted ourselves into a corner with a piled-up workload that should have been automated in the first place. It also represents a radical change in how things are done, which is going to produce mixed reactions that have to be factored in.


For those of you who have automated your networks, whether in large installations or small, at what point did you realize that doing things the old way was no longer a viable option? What did you do to ensure a successful transition, both technically and culturally?

In part 1 of this series, we covered some of the most prevalent and most promising cybersecurity models and frameworks available today. These are all tools that can help you determine the size and shape of the current information security landscape, and where you and your organization are within it. We also realized that even with all of this, you still can’t answer some fundamental questions about the specific technology you need to protect your digital infrastructure. As promised, I’m going to spend the next four posts covering the four critical domains of IT infrastructure security and the categories they each contain. Let’s start today with the perimeter.


Domain: Perimeter

The perimeter domain can be seen as the walls of a castle. These technologies are meant to keep information in and attackers out.  In many cases, a Demilitarized Zone (DMZ) and other public network services are exposed to the routable internet via systems within the perimeter domain. Additionally, an organization may have multiple perimeters, similar to an outer wall and an inner wall protecting a castle.


The categories in the perimeter domain are network security, email security, web security, DDoS protection, data loss prevention (DLP), and ecosystem risk management.


Category: Network Security

Network security is typically the primary line of defense for traffic entering or leaving an organization’s network, providing a first-look analysis of inbound traffic and a last look at traffic leaving your network’s span of control. The primary products in this category are firewalls, network intrusion detection/prevention systems (IDS/IPS), deep packet inspection (DPI), and other security gateways. Today, we rely on so-called next-generation firewalls (NGFW) to package the functionality of what used to be many devices into a single appliance or virtual machine. More and more, we are facing the challenges of deperimeterization as BYOD and cloud services stretch and blur the previously hard lines that defined our networks’ boundaries. This is leading to the rise of software-defined perimeter (SDP) tools that push security to the very edge of your new multi-cloud network.


Category: Email Security

Email has become a nearly universal communication medium for individuals and businesses alike, which also makes it a prime attack vector. Spam (Unsolicited Commercial Email - UCE) has been a nuisance for many years, and now phishing, click-bait, and malware attachments create real organizational threats. These attacks are so prolific that it often makes sense to layer email-specific security measures on top of network and endpoint solutions. Included within this category are email security products that offer antivirus, anti-spam, anti-phishing, and anti-malware features. Additional tie-ins to DLP and encryption are also available.


Category: Web Security

Much of our online activity centers around the web. This is increasingly true in our more and more SaaS-focused world. Web security seeks specifically to protect your users from visiting malicious websites. URL filtering (whitelist/blacklist) and other DNS tools fit into this category. Today, known and emerging threats are addressed within this category using Advanced Threat Protection (ATP) capabilities to analyze, diagnose, and dynamically implement rules governing web access in real-time.  This capability is typically provided using a subscription service to a threat database that has an influence on data exchange or name resolution traffic traversing a network.
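As an illustration of the blacklist side of URL filtering, here is a minimal sketch (the blocked domains are invented placeholders, not a real threat feed):

```python
# Minimal sketch of blacklist-style URL filtering.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"malware.example", "phish.example"}

def is_blocked(url):
    """Block a URL if its host, or any parent domain, is on the list."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the full hostname and every parent-domain suffix.
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

print(is_blocked("http://login.phish.example/account"))  # True
print(is_blocked("https://example.org/"))                # False
```

Production web security tools layer the same idea with live threat-database subscriptions and DNS-level enforcement rather than a static list.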


Category: DDoS Protection

Pundits and others spend a lot of time talking about “going digital.” What this likely means to you is that internet access is crucial to your business. Your employees need to reach the information and services they need, and your customers need to reach your website and other applications. Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks generate malformed/malicious packets or an excessive amount of inbound traffic to flood systems responsible for responding to valid queries.  Under such an attack, systems are unable to keep up with responses. D/DoS protection services recognize these attack techniques and implement methods to block the attempts or clean the inbound data streams so that only the valid traffic remains.
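One common cleaning technique is per-source rate limiting. As a hedged illustration (the rate and burst numbers are arbitrary examples, not a recommendation), here is a token-bucket sketch of the kind of throttling such protection services can apply:

```python
# Sketch of a token bucket: requests spend tokens, tokens refill over time,
# so a flood is capped while a valid trickle of traffic still passes.
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill based on elapsed time, then spend one token if available.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=5)
# A burst of 10 requests at t=0: only the first 5 (the burst capacity) pass.
results = [bucket.allow(now=0.0) for _ in range(10)]
print(results.count(True))  # 5
```

Real D/DoS scrubbing combines many such mechanisms (signature matching, anomaly detection, upstream filtering); this shows only the rate-limiting idea.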


Category: Data Loss Prevention

Data is the new gold. Your intellectual property is now made up of ones and zeros, so you can’t lock it in a file cabinet or a safe. You can still protect it though – probably better than you could when it was on paper. Data loss prevention (DLP) tools classify, analyze, and react to data at rest, in use, or in motion. DLP ensures that your data remains available to those who need it, and out of the hands of would-be attackers.
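As a simplified illustration of the classification step, here is a sketch that scans text for patterns resembling sensitive data (the patterns are toy examples, not a production DLP rule set):

```python
# Toy DLP-style classifier: scan outbound text for patterns that look
# like sensitive data and return the matching labels.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(text):
    """Return the set of sensitive-data labels found in the text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

print(classify("Please wire payment; my SSN is 123-45-6789."))  # {'ssn'}
```

A real DLP tool extends this with file and protocol awareness, policy actions (block, quarantine, encrypt), and coverage of data at rest and in motion, not just pattern matching on text.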


Category: Ecosystem Risk Management

Your cybersecurity is only as strong as the weakest link in your ecosystem. A vulnerability anywhere in the supply chain escalates organizational risk and jeopardizes productivity, profitability, and reputation. Partner, supplier, and vendor security risk is a major area that cannot be ignored as a business issue any longer. You need to be able to continuously identify, monitor, and manage risk to improve the cyberhealth of your vendor ecosystem.


Up Next

Obviously, the castle walls are only one part of a well-crafted defense. In the next three posts of this 6-part series, we’ll cover the remaining domains of endpoint & application, identity & access, and visibility & control. In the final post, we’ll look at the full model that these four domains create, how it fits into the broader cybersecurity landscape, and provide some advice on how to put it all into practice. Stay tuned!

What Is It All About?


You may or may not have heard of application performance monitoring, or APM. You may also have heard the term APM used in the context of application performance management. The two should not be interpreted as the same thing. Application performance monitoring is exactly what the name says: monitoring. It is all about monitoring the health of the application and its external constructs. Application performance management is likewise exactly what its name says: management. It is all about overall awareness and oversight of the application. Generally, you will find application performance monitoring as a subset of application performance management tooling, but not in every case.


Are you confused yet?


In this post, we will be discussing APM from the monitoring perspective. Over the next series of posts, we will touch on various aspects of APM, such as which components in the environment we should monitor to ensure a healthy application state, as well as the components of the application itself that should be monitored. Ultimately, APM should help provide a satisfying user experience. We will also be looking at effective event management, alerting, and dashboards. To keep things in perspective, we will first explore what I would term “after-the-fact implementation” of application performance monitoring in a traditional fashion. In a later post we will explore implementing application performance monitoring in an agile fashion.


So, to sum up and answer the question “What is it all about?”: it is all about monitoring an application’s health, including the various components that can affect the application’s health and performance. At the end of the day, we need a way to identify where the performance degradation is and how we can swiftly resolve the issue. Having the ability to do this efficiently is the ultimate win for everyone. We will also be better prepared for the ultimate question, “Why is my application so slow?”


How Do We Get Started?


One of the most challenging aspects of APM is answering the question “How do we get started?” It is challenging because we must first identify all of the components that can cause an application to become unhealthy and therefore degrade its performance. Because APM is application-focused, one might think we only need to look at the application itself: the server, container, etc. in which the application runs. Limiting ourselves to those items would overlook external components such as load balancers, databases, caching layers, hypervisors, the container platform, and more. Because of these additional layers, we may actually experience performance issues that come not from the application itself, but from an external component injecting the problem. To effectively identify all of the components, one must completely understand the overall architecture and ensure that each and every component is monitored. Remember, monitoring in this sense is more than just up/down, bandwidth, latency, and the like. We must obtain enough information from all of the components, and ensure that the data is correlated and aggregated, to effectively pinpoint an issue when our application’s performance is degraded.
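As a toy illustration of correlating those component checks into a single application health view (the component names and detail strings here are invented):

```python
# Hypothetical sketch: roll per-component checks (app, database, load
# balancer, cache, ...) into one application health view, so degradation
# can be traced to the layer that injected it.
def app_health(component_checks):
    """component_checks maps component name -> (healthy: bool, detail: str)."""
    failing = {name: detail
               for name, (ok, detail) in component_checks.items() if not ok}
    return ("degraded", failing) if failing else ("healthy", {})

status, failing = app_health({
    "web_app":       (True,  "p95 latency 120 ms"),
    "database":      (False, "replication lag 45 s"),
    "load_balancer": (True,  "all backends up"),
})
print(status, failing)  # degraded {'database': 'replication lag 45 s'}
```

The point is the shape of the answer: not just "the application is slow," but which layer is unhealthy, which is exactly what correlated, aggregated monitoring data makes possible.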


As you can see, there are abundant areas we should be monitoring that can in turn affect an application's performance. You may also be thinking to yourself that this is the way things have always been done. I would challenge whether they really have been, or better yet, how effective they have been over time.


Now is the time to get started and get a grasp on your true application performance.

This week’s Actuator comes to you from backstage at THWACKcamp. That’s right, I’m in Austin loading up on pork products and taking part in our annual two-day live event. When I am not on the livestream, I will be in the chat room answering any and all questions that I am able. Can't wait!


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


It’s alive! Scientists create ‘artificial life’ on a quantum computer

It's a simulation, i.e., it's code. But it's essentially the "Hello World" code for quantum computing. Something for us to build upon.


Are you a ‘cyberhoarder’? Five ways to declutter your digital life – from emails to photos

Data hoarding is a serious addiction, IMO. As a data professional I see the symptoms frequently. People need to learn to let go.


Life Is Dirty. So Is Your Data. Get Used To It.

Or, as I like to call it, “what happens when you allow stoners to perform data entry.”


Microsoft open-sources 60,000 patents to help Linux avoid lawsuits

Pretty sure a piece of adatole died when he heard this announcement.


Boston Dynamics’ Atlas robot can now navigate obstacles Parkour-style and it’ll haunt your nightmares

Just in time for Halloween: a robot uprising!


Cryptocurrencies Just Plummeted $13 Billion in Value Over the Course of a Few Hours

Relax folks, it’s only a loss on paper. And, to be fair, these currencies never had real value to begin with, unless you were in the extortion business, or needed a kidney.


Criminals used Bitcoin to launder $2.5B in dirty money, data shows

As I was just saying...


Let’s do this!


Good day!


My name is Paul Guido, but here on THWACK® I use the handle Radioteacher. Many years ago, I taught the beginning Amateur Radio – Technician test, so I created this specific handle to communicate with my students.


Working as an IT professional since 1993, I’ve had the privilege of using a number of different hardware and software platforms. Anyone remember 10base2 Ethernet or FDDI? For the past 20 years, I’ve worked at a financial institution in south Texas, working on systems, networks, monitoring, directories, storage, and virtualization. Throughout this work I always keep security top of mind.


I began by lurking on THWACK in mid-2011, but officially joined THWACK in June of 2012. My first meeting with the SolarWinds team was at VMworld 2012, where I acquired my first swag from the Blues Brothers themed booth—it was a cool, cloth Fedora hat. I also had lunch that day with two Head Geeks™, but at the time I had no idea who they were.


I have attended every THWACKcamp™ since 2013. In 2015, I was the only customer to attend in person at the SolarWinds HQ. During the 2016 and 2017 THWACKcamp events, it was fun to go to Austin and meet other THWACK MVPs from around the world.


The best MVP meeting at THWACKcamp 2017 happened at a restaurant in Bee Cave called Rocco’s. Rocco asked, "Do you all work at the same company?" The answer? Nope. "So, you all work at the same types of businesses?" Again, nope.


So, we told him, "All of us use the same software platform to monitor our unique networks for our companies." Rocco looked a little puzzled.


I’m looking forward to the “People Do Dumb Things: Why Security Is Hard for IT Pros” session on Day 2 at 10:00 a.m. CT. Destiny Bertucci, Mandy Hubbard, Sandy Hawke, and I will discuss security from different points of view. You probably already know Destiny, the SolarWinds Head Geek™, and my rambling posts on THWACK. Mandy Hubbard is a software engineer and QA architect, offering deep insights into how current software is made and tested. Sandy Hawke now works with companies to help with marketing, but also leads another life thanks to her two decades of information security experience. Sandy is basically a superhero.


Budget season is coming up. What is your team doing to improve the security posture of your company?


Security’s biggest enemy is complacency. Register. Attend October 17-18, 2018.



By Paul Parker, SolarWinds Federal & National Government Chief Technologist


Wary that the Internet of Things (IoT) could be used to introduce unwanted and unchecked security risks into government networks, senators last year created the IoT Cybersecurity Improvement Act of 2017, legislation that placed minimum security standards around IoT devices sold to and purchased by government agencies.


IoT and Edge: Hype vs. Reality


It’s good that provocative and important questions are being asked now, before edge computing and IoT truly take hold within the federal government. As it is, we are still at the start of their respective hype cycles, with true adoption hampered by security concerns.


Agencies are still grappling with BYOD security, let alone IoT or edge computing. The recent controversy surrounding fitness app Strava, which inadvertently revealed the location of classified military bases, made it abundantly clear that there is still much work to be done. Agencies are still trying to get past these fundamental hurdles before fully embracing IoT.


Agencies are still very much in the exploratory phase with edge computing. As such, it is unlikely we will see widespread adoption of these types of solutions over the next year.


Fortifying Current and Future Networks


Still, agencies are laying the infrastructure for these technologies and need to implement strategies to help ensure that their networks and data are protected. As such, there are several things IT professionals can do now to better fortify current and future operations.


  • Have a clear view of everything happening on your networks. If the IT team does not have the ability to accurately track and manage IP addresses and conflicts, domain names, user devices, and more, they will not be able to know if or when a bad actor is exploiting their networks. You must be able to tie events on the network directly back to specific users or events. This strategy also helps in evaluating the new devices on the network to confirm they are operating properly and securely.


  • Use trusted vendors. The IoT Cybersecurity Act of 2017 requires that vendors notify their customers of “known security vulnerabilities or defects subsequently disclosed to the vendor by a security researcher” or when a vendor becomes aware of a potential issue during the lifecycle of their contract.


  • Find the positive in potential intrusions. Intrusions can help IT pros evaluate and refine remediation strategies, and automated network security solutions can learn from the breach to offer protection for the future.


There’s every indication that IoT and edge computing will prove to be more evolutionary than revolutionary in 2018. Most agencies will likely continue to be cautious with these technologies, as the first consideration must be how IoT and edge computing devices will be managed and secured.


But the more agencies learn about these technologies, the more they will ultimately be adopted. Agencies must begin preparing for that day. The best way to do that is to implement strategies that can help them solidify network security today while laying the groundwork for tomorrow.


Find the full article on SIGNAL.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

My name is Will, and I hide in the shadows of the NOC. I’ve worked for an ISP/Telco for the past several years (13?) and spent nearly all that time in the NOC. I’m tasked with finding ways to keep our folks informed of the various events occurring throughout our environment.


When I started working in our NOC, this is what they had been running:


While I'm not 100% on my timeline, I'm pretty sure I took over our SolarWinds® environment somewhere around version 9 or version 10 of NPM. Previous management had gotten rid of it, having replaced it with an overly complicated product, which ultimately failed a couple years later. Shortly thereafter, said management was replaced. Afterwards, we were asked what product we wanted, to which we replied, "SOLARWINDS!!!" When asked why, it was a unanimous response. "Because it's easy to use, and is a very flexible product." Since I have been in charge of our SolarWinds environment, starting only with NPM, we have added many additional SolarWinds products, such as NCM, NTA, SAM, DPA, Kiwi Syslog®, Web Help Desk®, a couple of Engineer's Toolsets, a couple of additional polling engines, a couple of additional web servers, and most recently, VMAN.


THWACK® has become part of my everyday routine, though it did not start as such. In the beginning, I mainly had a bunch of complaints, as well as a few specific questions about how to resolve various problems in our network. From there, I found THWACK points and FREE shirts. As time passed and new issues appeared at work, each requiring a different solution, I started to really get into this community. After asking several questions and receiving numerous answers, it started to click with my brain. I became more and more interested and curious about the product(s). I thought, well, if it can do this, I wonder if it can also do that, and more. Eventually, returning to the here and now, THWACK became an extremely important resource, not only for me to get answers, but also for me to help answer the questions of other THWACK users, as was done for me. It seems amazing to me, looking back at the how and why of joining and using THWACK, and seeing myself go from one end of the spectrum to the other.


I’m looking forward to this year's THWACKcamp, as I have every year. I have attended THWACKcamp every year it has existed, both virtually and via the in-person invites. The 2017 THWACKcamp was my favorite, as SolarWinds hosted several other THWACK MVPs in person at their headquarters in Austin, Texas. Getting to meet everyone was great, but getting to talk shop with other folks using SolarWinds products, and seeing solutions through their points of view, was easily the best part.


In regard to the upcoming 2018 THWACKcamp: While there are tons of great sessions, I’m definitely looking forward to the "There's an API for That: Introduction to the SolarWinds Orion® SDK" session the most, which takes place on October 17 at 11 a.m. CT.


One of the best aspects of the SolarWinds products is the flexibility. You can open the box and use the modules as is, likely covering the needs of most. Or, you can customize your environment in countless ways. Maybe just start with a simple SQL query. Then evolve and convert that SQL query into a SWQL query. Well, don't stop there; you can use that SWQL query to automate some (all?) of your daily tasks. Over the past couple of years, I have been doing more and more work with the SDK/API side of the products. Ironically, my adventures into the Orion SDK/API world have some connections to some of the first questions I asked on THWACK. While I wasn't quite ready to take the leap back then, many of the questions I have asked on THWACK can easily be answered using the API to build the solution. Now, when I say "easily", I mean you don't need to be a master programmer coder extraordinaire. You simply need to be willing to take the time to think and ask questions. I think it's one of those "experience is something you get just after you need it" type of things.
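As a hedged illustration of what the API side of that evolution can look like, here is a sketch that builds a SolarWinds Information Service (SWIS) REST query URL for a SWQL statement. The default port (17778) and the v3 JSON Query path follow the Orion SDK documentation of that era, but verify them against your own installation; in practice the orionsdk Python package or the SwisPowerShell module handles this plumbing for you.

```python
# Sketch: construct the SWIS v3 JSON Query endpoint URL for a SWQL statement.
from urllib.parse import quote

def swis_query_url(host, swql, port=17778):
    """Return the SWIS REST Query URL for the given SWQL statement."""
    return (f"https://{host}:{port}/SolarWinds/InformationService/"
            f"v3/Json/Query?query={quote(swql)}")

url = swis_query_url("orion.example.local",
                     "SELECT TOP 5 Caption, Status FROM Orion.Nodes")
print(url)
```

An authenticated GET against that URL returns the query results as JSON, which is the raw material for the kinds of page, view, and graph automation described below.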


Graphs have been a big thing for us for a long time. We need to graph way too many things to be building pages manually. And, while we're at it, we need a better way to manage all of our pages/views. Luckily, we have the Orion SDK/API to help. Each time I revisit these projects, we make a little more progress, and somehow find a way to evolve them to the next level. Here are a few links to some of the different ways we have used the SDK/API to bridge the gap. After THWACKcamp 2018, I'm expecting to have learned even more new ways to improve my projects.

Using PowerShell To Automatically Provision a Series of Graphs Per View

Export All Reports, In Bulk, Via PowerShell and API

Adding Custom Tabs to The Top Level Nav Bar

Custom SWQL Views Manager

Using Your Custom HTML Resource to Build A Better Way to Navigate Your Custom Views

Using Your Custom HTML Resource to Properly Display SWQL Query Results



While I have been working with SolarWinds products for many years now and have accrued a fair amount of experience, I continue to attend new training courses and revisit updated ones. (Virtual Classrooms | SolarWinds Customer Portal) I also look for helpful videos (eLearning - SolarWinds Worldwide, LLC. Help and Support), SolarWinds Lab sessions (SolarWinds Lab), and especially THWACKcamp 2018, as I know the next great idea for my environment will probably come from a comment, idea, or reference from another THWACK user, as it has so many times before.


What’s in your widget? Do you have a cool mod you want to share, or is there a task you need to accomplish but don’t know how? Let us know, and maybe we can find the solution together.


Join me. Register now. Attend October 17-18, 2018.

THWACKcamp 2018 – Tips & Tricks: Thinking Outside the Box


THWACKcamp 2018 is approaching fast, which also means it’s time for one of our most popular THWACKcamp sessions—Tips & Tricks.


In this Tips & Tricks session, titled “Tips & Tricks: Thinking Outside the Box,” I will be joined by THWACK® MVP and SolarWinds Technical Content Manager Kevin Sparenberg as we delve into some of your favorite products and how to get the most out of them with simple but powerful tips and tricks. We want to make sure your products are tailored to your needs and adaptable to the nuances of your particular environment. Live demos with step-by-step direction will help you visually recognize the different capabilities available to you within some of your favorite tools, making it easy for you to play around with these cool features. Want to get more out of your Orion® Platform, particularly the Orion SDK? We’ve got you covered. Been thinking about how you can get ahead of API exhaustion for Office 365 or other, similar tools? Not a problem. Ready to learn more about the Millennium Falcon LEGO alert? Sure thing.


In case you haven’t already heard, registration for THWACKcamp is open, so it’s time for you to sign up for this entirely free, 100% virtual, multi-track learning event. Take advantage of our comprehensive and thoroughly entertaining sessions, featuring your beloved SolarWinds Head Geeks, as well as technical experts on a wide range of relevant and necessary topics in the world of IT.

Now that we all carry supercomputers complete with real-time GPS mapping in our pockets, a reference to physical maps may feel a bit antiquated. You know the ones I’m talking about; you can still find them at many malls or theme parks, and even some downtown city streets. It’s usually a backlit map on a pillar with a little arrow marking “you are here.” It’s designed to give you a sense of where you are and how to get where you're going. While that physical map may feel a bit dated, at least it’s still effective. That’s more than I can say for many of the InfoSec practices, products, and procedures we find at companies of all shapes and sizes.


That security gap is really not surprising though. Organizations and individuals alike are becoming more and more connected, while information and assets are becoming more and more digital. At the same time, the bad guys are becoming more and more organized and sophisticated. It feels like new threats, vulnerabilities, and breaches are announced every day. To keep pace, vendors seem to announce new products every week, not to mention all the new companies that are constantly popping up. As security professionals, we are left trying to sort out the mess. Which tools provide defense in depth, and which are just causing duplication? How do I even compare competing products and the protections they provide?


Luckily there are some models, frameworks, and best practices available to help us figure it all out.


Three of the most widely known and referenced are ISACA COBIT, ISO 27002, and NIST CSF:

  • COBIT is a “business framework for the governance and management of enterprise IT” published by the Information Systems Audit and Control Association (ISACA). Governance is the key word there; this is a high-level framework to help executives establish policies and procedures. It’s the widest in scope, is best used for aligning business objectives with IT and security goals, and can be thought of as a strategic base for the ISO and NIST frameworks.
  • ISO 27002 is a set of best practice recommendations for implementing an Information Security Management System (ISMS) published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). It is essentially a list of checklists for operational controls that are used in conjunction with the requirements laid out in ISO 27001 to help ensure that your approach is comprehensive.
  • The Cybersecurity Framework (CSF) published by the US National Institute of Standards and Technology (NIST) is much more tactical in nature. Its most recognizable aspect is called the “Framework Core,” which includes five functions: Identify, Protect, Detect, Respond, and Recover. It also includes “Implementation Tiers” and “Profiles” to help you define your current risk management abilities and future/target goals within each of the functions.
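One practical way to use the CSF Core is as a set of buckets for answering the duplication-vs-defense-in-depth question from earlier: map each tool you own to a function and look for empty or crowded buckets. The sketch below assumes nothing beyond the five published function names; the tool names are hypothetical placeholders.

```python
# The five functions of the NIST CSF Framework Core.
CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

# A hypothetical tooling inventory, keyed by CSF function.
inventory = {
    "Identify": ["asset-discovery scanner"],
    "Protect": ["endpoint agent", "firewall"],
    "Detect": ["SIEM", "network monitor"],
    "Respond": ["ticketing/runbook system"],
    "Recover": ["backup and restore platform"],
}

def coverage_gaps(inv):
    """Return the CSF functions with no tool mapped to them."""
    return [f for f in CSF_FUNCTIONS if not inv.get(f)]

print(coverage_gaps(inventory))  # -> [] : every function has at least one tool
```

A function with several tools mapped to it is a place to check for duplication; an empty one is a gap in depth.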


A couple additional frameworks that are less well known but worth reviewing are RMIAS and ATT&CK:

  • RMIAS stands for Reference Model for Information Assurance & Security. This model "endeavors to address the recent trends in the IAS evolution, namely diversification and deperimeterization.” It describes four dimensions (security development lifecycle, information taxonomy, security goals, and security countermeasures) and incorporates them into a methodology that helps to ensure completeness, risk analysis, cost-effectiveness/efficiency, and consistency in your IAS practice.
  • ATT&CK stands for Adversarial Tactics, Techniques & Common Knowledge. It "is a curated knowledge base and model for cyber adversary behavior, reflecting the various phases of an adversary’s lifecycle and the platforms they are known to target.” In other words, it contains deep knowledge about how and where the bad guys are known to attack. Provided by MITRE, a non-profit R&D organization, it is gaining wide acceptance among practitioners and vendors alike as a common language and reference.


Of course, there is also a growing list of industry-specific frameworks, models, and regulations like HIPAA, HITRUST, FedRAMP, PCI-DSS, SOC, CIS, and more. While all of this is great, I’m still left with those same questions: Which tools provide defense in depth, and which are just causing duplication? How do I even compare competing products and the protections they provide?


What we require is a more practical model of the specific technologies needed to secure our organizations.


Through the remainder of this series, I will introduce and describe a reference model of IT infrastructure security that aims to fill this gap. Over the next four posts I will illustrate four technology domains (perimeter, endpoint & application, identity & access, and visibility & control), including the current drivers and the specific categories within each. Then, in the final post, I will describe how this model fits within the broader ecosystem of cybersecurity countermeasures and provide some advice on how to put it all into practice.

THWACKcamp 2018 – People Do Dumb Things: Why Security is Hard for IT Pros


We often hear a lot of discussion about high-level security, but these types of concerns aren’t really what the general public is facing on a day-to-day basis, at work, or at home. People who have a very limited or virtually non-existent background in IT might not even realize that the things they’re doing are putting their data, your data, and potentially even your business at risk. So what kind of security risks do we see from the vast majority of the people across all companies and organizations, and how do we actually resolve them?


In this THWACKcamp panel session “People Do Dumb Things: Why Security is Hard for IT Pros,” I’ll be joined by Broadway National Bank Sr. Network Security Engineer and THWACK® MVP Paul Guido, CS Disco Software Engineer/QA Architect Mandy Hubbard, and Computer and Network Security Shaman Sandy Hawke to discuss the most practical ways you can keep team members from putting your security at risk. We’ll place some of our IT expertise on the back burner as we try to tackle these issues from an IT novice’s viewpoint, so we can arrive at realistic and meaningful solutions for preventing these small and large security fumbles from happening in the first place. As security breaches become more commonplace, it’s important that we as IT professionals remember that these breaches don’t have to be the norm. Even the use of social media can hold potential threats that we need to think about and create safeguards for. This session is geared toward people of all IT skill levels who want to improve their security—so basically, everyone.


Not yet registered for the premier IT event that thousands of your peers have already signed up for? No worries, you can register for THWACKcamp 2018 today! With two days of sessions—taking place October 17 – 18—THWACKcamp provides you with the opportunity to learn from SolarWinds Head Geeks and IT industry experts in a number of different fields, all for free and from the comfort of your laptop. Don’t miss out on this entirely free, virtual IT event that’s sure to take your IT game to the next level.

I’m sure you’re intrigued by the title of this THWACKcamp 2018 session (as you should be).


So, what exactly does the classic movie “The Seven Samurai” (which inspired the recent remake “The Magnificent Seven”) have to do with data protection? Well, quite a lot, as it turns out. After watching “The Magnificent Seven,” it dawned on me that there are striking similarities between the way the villagers are protected and the way data is protected.


During this session, “The Seven Samurai of SQL Server Data Protection,” I’ll be joined by InfoAdvisors Senior Project Manager and Architect Karen Lopez as we break out seven different features that can help safeguard your data. Whether it’s data at rest, data in use, or data in motion, your data needs protection.


We’ll be highlighting key features of SQL Server, as well as defining how each of these features can be applied to the three different types of data. Transparent Data Encryption, Dynamic Data Masking, and Always Encrypted are just some of the features that we’ll look into and walk you through, so you can strike down data attackers at a moment’s notice.


Don’t want to miss this session or any of the others offered during THWACKcamp 2018? Well, then don’t! The event is entirely free and features SolarWinds Head Geeks and IT experts in a wide range of fields discussing the topics most relevant to you. You can even chat with these tech dynamos during the event! Be sure to register for this premier, online IT event, happening October 17 – 18.

Whether you’re a seasoned IT professional or a tech newbie just trying to get into the IT game, you’ve probably noticed that alerts can be a real pain—if not managed correctly.


Don’t let alerts control your life and bring down your monitoring. Join me and SolarWinds engineer Mario Gomez during the session “Alerts, How I Hate Thee,” as we hash out some of the real struggles that poorly crafted alerts can create, and then discuss practical solutions for improving your alerts and resolving these issues. Some alerting topics we’ll dive into include: understanding and leveraging the differences between an alert scope and a trigger; the best time to trigger an alert; best practices for testing your alerts; and options for sending notifications that give a break to your poor old email system. And of course, we’ll also look at some options for integrating alerting into external systems like Slack and ServiceNow, as well as using automation to take your alerting to a whole new level. After all this discussion and analysis, we’ll hopefully all come out the other side hating alerts a little less and starting to enjoy the benefits proper alerting can have on monitoring.


This session is just one of many that you can look forward to during THWACKcamp 2018. Taking place from October 17 – 18, this two-day, premier online event is entirely free! Enjoy the event from the comfort of your computer—wherever that may be—as you learn from SolarWinds Head Geeks and a wide array of technical experts, all of whom bring their different backgrounds and areas of expertise to the table. Be a part of an important industry discussion that will gear you up for all the IT goals you want to meet in the coming year. If you haven’t already, be sure to register so you don’t miss out on this year’s THWACKcamp!

Modern network and systems engineers have a lot to deal with. Things like overlapping roles, constantly changing client devices, and new applications popping up daily create headaches beyond comprehension. The amount of data produced by packet captures, application logs, and whatever else you have to help troubleshoot issues is astounding and can leave even the most skilled engineer reeling when trying to track down an issue. The human brain is designed to recognize patterns and outliers, but it is simply not equipped to deal with the scale of today’s IT issues.


Over the past few years, data scientists and software developers have teamed up to try to solve this problem using an old paradigm built on emerging technology: Artificial Intelligence and Machine Learning (AI from this point on). AI, put simply, is a program that is “trained” to monitor a given data set while looking for either specific or unspecific information contained within. By scouring the data, it can build complicated patterns and baselines, learning along the way what is “normal” for a given system and what isn’t, as well as forming predictions based on what could be coming next. From there, a few things can happen, ranging from simply flagging the information for later review by human eyes all the way up to completely automated remediation of any notable issues discovered.
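The "learn a baseline, flag what deviates" idea can be sketched in a few lines. Real AI/ML systems learn far richer patterns than this; the simple mean/standard-deviation threshold and the latency numbers below are just a toy illustration of the shape of the idea.

```python
import statistics

def flag_outliers(history, new_samples, n_sigmas=3.0):
    """Flag samples that deviate from the historical baseline by more
    than n_sigmas standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in new_samples if abs(x - mean) > n_sigmas * stdev]

# Hypothetical latency readings (ms): a stable baseline, then a spike.
baseline = [20, 22, 19, 21, 20, 23, 21, 20, 22, 21]
incoming = [21, 22, 95, 20]

print(flag_outliers(baseline, incoming))  # -> [95]
```

In the terms used above, the flagged value could either be queued for human review or handed to an automated remediation step.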


Right now, you’re either thinking “Sign me up! I want to AI!” or “Not another one of these…” And I don’t blame you. I’m all AI-ed out too. But that doesn’t mean it isn’t useful. Just do what I do when I hear yet another marketing person start talking about the rise of the machines. Cover your ears and yell. I don’t care about how a vendor is going to get something done, I just want to know what it will do for me. I’m a wireless network architect, a coder, and a technology lover. I am not a data scientist. Make my job easier and I’ll be your best friend at lunch (which you’re paying for). What can AI do for me and you as the ones keeping the lights on and moving our companies forward technologically?


The most basic use case I’ve come across is historical data correlation and actionable health prediction. Like I mentioned, I’m a wireless architect. I design and manage a massive wireless network across a large geographic area. Monitoring the network, applications, RF, and performance of hundreds of thousands of clients across the entire organization isn’t just a difficult job; it’s impossible for a single group to do without some serious assistance. The tribal knowledge contained within each team I work with is impossible to impart to any single person, let alone have that person monitor everything and correlate all that information into anything useful. That’s where AI comes in. By using historical trends to predict new needs, looking at past failures to anticipate when the next one will occur, and knowing that certain events or even times of the year will cause application load we can’t handle, we can properly scale and prepare for nearly anything.
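A back-of-the-envelope version of "using historical trends to predict new needs" is a straight-line fit over past measurements, extrapolated forward. Production capacity planning would use far more signals and smarter models; the monthly client counts below are hypothetical and the least-squares fit is just the simplest possible sketch.

```python
def linear_fit(ys):
    """Least-squares slope and intercept for evenly spaced samples."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical wireless client counts over six months.
clients = [1000, 1100, 1180, 1300, 1420, 1500]
slope, intercept = linear_fit(clients)

# Extrapolate one month ahead (month index 6).
next_month = slope * len(clients) + intercept
print(round(next_month))  # -> 1608
```

Even this crude trend line answers a real planning question: whether next month's demand will still fit inside current capacity.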


Now, as I mentioned above, I’m not a data scientist but I have no problem letting them make my job easier. As much as I hate the term AI (I’ll explain in a later post) and the marketing drives me crazy, I am welcoming it with open arms. I am also waiting with a baseball bat to take it down if anything goes wrong.


This is the first post in a series I will be writing about Artificial Intelligence and Machine Learning, which will cover how I use it, why I love it, why it terrifies me, and why words matter. Stay tuned for more. I promise it won’t be too dull.

