Find yourself wondering what GITEX is, exactly?

 

Well, a few things come to mind right away. Dubai. Burj Al Arab. Burj Khalifa. ATMs dispensing gold bars. Air-conditioned boardwalks. Police driving Italian sports cars.

Plus, let’s not forget about Manousheh, Shawarma, and Sangak.

And this isn’t even getting into the conference itself!

 

GITEX, the largest technology event in the Middle East, is returning for its 38th year. This one feels particularly special, though, because we will be participating for the first time ever.

SolarWinds is excited to be a part of the crowd of up to 150,000 people expected to attend during the week of October 14 – 18, 2018.

 

So, what should you anticipate from us?

We managed to group together an extensive list of subject matter experts. What subjects, you ask?

For one, we will be discussing all things SolarWinds.

These experts will be discussing and demonstrating the latest updates and products, and explaining how they can improve your work life. You don’t want to miss learning how useful these products can be.

Looking for even more IT insights and knowledge? Come talk to us.

You can drop by any time, or if you already have questions in mind, schedule a meeting with us ahead of time. Just follow this link to sign up!

 

 

We look forward to meeting with you and discussing how we can help you sort out your IT problems.

One other important thing to mention: SolarWinds has been known to give away great swag at trade shows. The only way to find out if it’s true is to stop by our booth!

 

See you in Dubai.

Have you registered for THWACKcamp™ 2018? What are you waiting for? This year’s event offers multiple sessions from both industry experts and SolarWinds Head Geeks™ on topics important to you, including mapping, logging, security, and more.

 

I’ll be joining SolarWinds director of engineering Dan Kuebrich in a session titled “‘Observability’: Just a Fancy Word for ‘Monitoring’? A Journey from What to Why.” We’ll be parsing through the hype surrounding observability to bring you a useful set of approaches to combine your metrics, logs, and even traces to make your operations life easier. You’ll learn about metrics abstraction, how to use logs as evidence in root cause investigation, and ways to observe transactions end to end as they travel through all elements of your infrastructure.

 

We’re excited to make this our best THWACKcamp yet. Don’t miss this premier, two-day online IT learning event, where thousands of attendees network with each other on the hottest topics in IT today. It’s our seventh year, and we can’t wait to (virtually) see you October 17th and 18th.

 

Browse all the sessions and register now for reminders, giveaway details, and more.

 

See you there!

I don’t think there’s anyone out there who truly loves systems monitoring. I may be wrong, but traditionally, it’s not the most awesome tool to play with. When you think of systems monitoring, what comes to mind? For me, as an admin, a vision of reading through logs and looking at events and timestamps keeps me up at night. I definitely see that there is a need for monitoring, and your CIO or IT manager definitely wants you to set up the tooling to get any and all metrics and data you can dig up on performance. Then there’s the ‘root cause’ issue. The decision makers want to know what the root cause was when their application crashed and went down for four hours. You get that data from a good monitoring tool. Well, time to put on a happy face and implement a good tool. Not just any tool will do, though. You want a tool that isn’t just going to show you a bunch of red and green lights. For it to be successful, there has to be something in it for you! I’m going to lay out the top three things that a good monitoring tool can do for you, the admin or engineer in the trenches day in and day out!

 

Find the Root Cause

 

Probably the single best thing a (good) systems monitoring tool can do is find the root cause of an issue that has become seriously distressing for your team. If you’ve been in IT long enough, the experience of having an unexplained outage is all too familiar. After the outage is finally fixed and things are back online, the first thing the higher-ups want to know is “why?” or “what was the root cause?” I cringe whenever I hear this. It means I need to dig through system logs, application event logs, networking logs, and any other avenue I might have to find the fabled root cause. Most great monitoring tools today have root cause analysis (RCA) built in. RCA can save you hours or even days of poring over logs. In discussions about implementing a systems monitoring tool, make sure RCA is high on your list of requirements.

 

Establish a Performance Baseline

How are you supposed to know what is an actual event and what is just a false positive? How could you point out something that’s out of the norm for your environment? Well, you can’t, unless you have a monitoring tool in place that learns what normal activity looks like and which events are simply anomalies. With some tools that offer high-frequency polling, you can pull baseline statistics for behavior down to the second. Any good monitoring tool will take a while to collect and analyze data before producing metrics that have meaning to your organization. Over time, the tool will adapt its baseline and keep providing you with the most up-to-date, accurate metrics. False positives can eat up a lot of resources for nothing.
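
To make the baseline idea concrete, here is a minimal sketch, assuming the metric samples have already been collected by your polling engine. The history values and the three-sigma threshold are illustrative assumptions, not any particular vendor's algorithm.

```python
# A minimal baseline sketch: learn mean/stddev from history, then flag
# samples that fall far outside what "normal" looks like.
import statistics

def build_baseline(history):
    """Return (mean, stddev) learned from a list of historical samples."""
    return statistics.mean(history), statistics.pstdev(history)

def is_anomaly(sample, mean, stddev, threshold=3.0):
    """Flag samples more than `threshold` standard deviations from the mean."""
    if stddev == 0:
        return sample != mean
    return abs(sample - mean) / stddev > threshold

# Hypothetical CPU utilization (%) polled at regular intervals.
history = [22, 25, 21, 24, 23, 26, 22, 24]
mean, stddev = build_baseline(history)
print(is_anomaly(24, mean, stddev))  # False: within the learned norm
print(is_anomaly(95, mean, stddev))  # True: well outside the baseline
```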

 

Reports, Reports, Reports

When issues arise, or RCA needs to be done, you want the systems monitoring tool to be capable of producing reports. Reports can come in the form of an exportable .csv, .xls, or .pdf file. Some managers like a printout, a hard copy they can write on and mark up. With the ability to produce reports, you can keep a solid history of network or systems behavior in SharePoint or whatever file share you have. Most tools keep an archive or history of reports, but it’s always good to have the option of exporting for backup and recovery purposes. I’ve found that a sortable Excel file I can search through comes in very handy when I need to really dig in and find an issue that might be hiding in the metrics.
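
And if your tool only hands you raw metrics, a few lines of script can produce that sortable file yourself. Here is a minimal sketch using only Python's standard csv module; the rows are made-up sample data.

```python
# Export hypothetical metric rows to a CSV that opens and sorts in Excel.
import csv

rows = [
    ("2018-09-01 00:00", "core-sw-01", "cpu_pct", 34),
    ("2018-09-01 00:05", "core-sw-01", "cpu_pct", 87),
]

with open("weekly_report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "device", "metric", "value"])  # header row
    writer.writerows(rows)
```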

 

Systems monitoring tools can do so much for your organization, and more importantly, you! When you are evaluating a systems monitoring tool, sift through all the bells and whistles and be sure that at least these three features are built in… it might save your hide one day, trust me!

In Orlando this week for Microsoft Ignite. Lots of great announcements are happening; I'm trying my best to digest it all, and it's always better to talk data with a friend. So, stop by the booth, say hello, and let's chat.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Microsoft Managed Desktop plan turns Windows 10 device management over to Microsoft

Microsoft Managed Desktop is coming, giving Microsoft yet another revenue stream as the company is looking to become the biggest MSP the world has ever seen.

 

Amazon's Alexa knows what you forgot and can guess what you're thinking

“Alexa, why did I walk in this room?”

 

Google Says It Continues to Allow Apps to Scan Data From Gmail Accounts

This is why we need GDPR rules for U.S. citizens. Or, this is why we should stop using Gmail. One of those. Maybe both.

 

Dutch supermarket giant uses blockchain to make orange juice transparent

Great example of a company applying the wrong technology to solve a problem that doesn’t exist.

 

Remote only

Wonderful summary of how remote working can thrive and be successful for a company. I’ve been working remotely for 8+ years now, and I can’t imagine going back to an office again.

 

Amazon’s gadgets don’t have to be pretty so long as they’re cheap

And Apple products don’t have to work, so long as they are expensive.

 

The Decision Matrix: How to Prioritize What Matters

Based upon the Eisenhower Matrix, the Decision Matrix helps you determine which decisions you need to make, versus those you can delegate. This is brilliant.

 

The room keys for Microsoft Ignite are sponsored by VMware and I don't know what is real anymore:

 

We’re sure getting excited for THWACKcamp™ 2018, and I’m particularly excited for you to join me at one of my sessions, “There’s an API for That: Introduction to the SolarWinds® Orion® SDK.” You won’t want to miss this exciting and informative session, so be sure to make it a part of your session track planning, along with the other highly anticipated THWACKcamp sessions.

 

During this session, I’ll be joined by two fellow THWACK® MVPs. You’ll be able to watch history in the making, as it’s the first time three MVPs present a THWACKcamp session together. On top of being THWACK all-stars and IT experts, Zack Mütchler (zackm) is an experienced software engineer and Leon Adato (adatole) is a SolarWinds Head Geek™. Together, the three of us will dive deep into the Orion Platform API, which is readily available to anyone who has installed and is working on the Orion Platform. As the title implies, this session is not just about the API, but also the Software Development Kit (SDK), which is essentially a bundle of tools that help you use and interact with the API. With all this already at your disposal, what next? Tune into this session to learn exactly what the SDK and API are capable of, and how you can use both to their greatest potential, including removing some of the repetition involved in your tasks and, in the process, making your life a little easier.
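
To give you a taste of what we'll cover, here is a minimal sketch using the open-source orionsdk Python package. The hostname and credentials are placeholders, and the query is just one example of the SWQL you can run against the API.

```python
# Query the Orion API for monitored nodes (hostname/credentials are fake).
import requests
from orionsdk import SwisClient

requests.packages.urllib3.disable_warnings()  # Orion often uses a self-signed cert

swis = SwisClient("orion.example.com", "admin", "password")

# SWQL is a SQL-like language for the SolarWinds Information Service.
results = swis.query("SELECT TOP 10 NodeID, Caption, Status FROM Orion.Nodes")
for node in results["results"]:
    print(node["NodeID"], node["Caption"], node["Status"])
```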

 

THWACKcamp is the two-day, premier online IT event that anyone involved in or interested in the IT world should be a part of. A top-tier event like this must be pretty expensive, right? Not at all. You can take part in this entirely virtual, totally awesome IT learning event for free! You don’t even have to worry about travel costs, since it can be accessed anywhere there’s Wi-Fi. Register now for THWACKcamp 2018, taking place October 17 – 18. See you there!

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Remote management can be much easier to get wrong than to get right. Getting it right takes an understanding of your current needs, your infrastructure, and how things may evolve; an understanding of what you need to manage; and an understanding of how best to perform remote management within your agency.

 

We suggest three steps: planning, investing in the right tools, and taking a big-picture approach.

 

Step One: Planning

 

As with any new project or implementation, you must know where you are and where you want to go before you can get started.

 

Start with discovery. Then, gain an understanding of compatibility and performance: are databases, routers, servers, applications, and operating systems all performing optimally? Perform baseline measurements to be sure you can differentiate between normal activity and a potential “event,” or identify trends that may not occur as suddenly but have a similarly large impact.

 

The next part of planning is understanding the future. Will there be architecture changes or a need for increased bandwidth? Take as much into account as possible with the understanding that there is no way to predict every change.

 

Finally, the planning phase includes goals: what do you want to manage?

 

Step Two: Investing in the Right Tools

 

Invest in tools that can give you all the information and control you need. For example, consider tools that give you information about remote devices before you connect to them. Taking that concept even further, consider tools that do far more than simple remote control or basic troubleshooting. Remote administration should be your goal if you’re looking to maximize your time and investment.

 

When selecting which tools to use, consider scalability and flexibility, as well as government security requirements such as smart card support.

 

By investing in the necessary tools to assist in diagnostics, repairs, and management, agencies can prepare their networks to solve issues that arise.

 

Step Three: Taking a Big-Picture Approach

 

A complete approach will involve best-in-class tools, each of which specializes in a specific part of remote management. That said, the most important point is to be sure all those tools work together.

 

Especially when managing a highly disparate infrastructure, it is absolutely necessary to be able to see the bigger picture—to see how everything is operating as a whole, and to be able to drill down to the individual network, server, or end-user device to troubleshoot and solve problems that may arise.

 

Conclusion

 

Successful remote management can help federal IT teams manage multiple disparate systems at once, from a single management point; remotely diagnose and rectify problems within the network; and leverage automation to improve efficiency—all of which can help teams more effectively focus on the agency’s mission.

 

Find the full article on Government Technology Insider.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

SolarWinds has committed $75,000 to support American Red Cross, All Hands Volunteers, and the Food Bank of Central and Eastern North Carolina in their disaster relief efforts assisting communities affected by Hurricane Florence. In addition, SolarWinds will match U.S. employee donations to these organizations through October 31, 2018, on a $1-for-$1 basis.

"In times like this, solidarity is what sees us through the storm." - SolarWinds CEO Kevin B. Thompson

SolarWinds has also committed a minimum of 1,000 employee volunteer hours over the next six months to organizations that will continue recovery and rehabilitation efforts, including the American Red Cross, All Hands Volunteers, and the United Way of Coastal Carolina. Employees will be able to contribute up to one week of their time without impact to their paid time off. SolarWinds is also supplying temporary housing and local aid to affected employees.

 

All of us at SolarWinds encourage THWACK members to join us in contributing to these organizations – whether you are able to give funds or volunteer your time, your gifts are critically important to assist communities impacted by the disastrous flooding caused by Florence.

 

To donate money to Hurricane Florence relief, please visit:

 

To assist with volunteer efforts, please visit:

 

If you have any questions about the SolarWinds #GeeksThatGive response to Hurricane Florence, please reach out to pr@solarwinds.com or @jennebarbour.

Have you talked to your users lately? Here's a fun task for you this week or next: Take one of your most problematic users out to coffee. Tell them you're going to grab a latte together and you just want some feedback. When you have a nice hot cup of java, ask them what their biggest pain point is in their job when it comes to IT. Now, you're not allowed to make declarative sentences. You can only ask questions. You can't defend the network or the virtualization stack. You can only listen and ask questions. Make sure you write down their responses.

 

It's Not The Network

Okay, while you're looking at your calendar for a good day for coffee, I'm going to tell you what your problematic user is going to say. Or, more accurately, what they aren't going to say. They won't talk about network latency or wireless coverage or even slow disk speeds. Do you know what they're going to complain the loudest about?

 

Applications.

 

Why? What is so special about those irritating programs? After all, we as IT pros don't even notice them. We're so busy keeping our systems running that we barely even notice when something is amiss in the land of pretty software buttons and screens.

 

But, for your users, the application is the gateway to your infrastructure. We spend so much time focused on switches and SANs that we forget that those are just pieces to a larger puzzle. We monitor and produce volumes of analytics about response times and availability and yet we barely even acknowledge when someone is telling us something is slow or misbehaving. Why? What should we be focused on?

 

Now you see why the coffee is so important. And why asking questions instead of getting defensive is key. You can't justify a bad user application experience with talk about how optimized your infrastructure is. Users don't care. They only see what their application sees. Slow performance can be anywhere. We use our tools and monitoring infrastructure to find it, but the users only see bad performance. We need to listen to their perception of things in order to understand why things are running the way they are.

 

I used to have an entire speech ready to go on a moment's notice when a user told me something was slow. It sounded a lot like a scene from Ace Ventura when the detective is overwhelming someone with questions and facts about the minutiae of every detail in the operation of a computer. But I was comically missing the point. The users didn't care about startup times or application loading screens. They just saw things not running the way they should and wanted to tell me about it.

 

Be Invisible

If you want to be the best IT professional that you can be, you need to disappear from the user's field of view. Your infrastructure needs to melt away until the application is the only thing they see. It's not all that uncommon when you think about it.

 

When's the last time you complained about a specific piece of the electrical grid? Or about the sewer system outside your house? Most of us understand these systems on a basic level, but we don't have a clue how they operate behind the scenes. Sure, there are a ton of things that go on outside of our view. And we know little about any of them, even when they break. The utility companies do their best to monitor and maintain infrastructure. But they don't spend countless hours telling anyone who will listen how their distribution nodes couldn't possibly be the cause of brownouts.

 

Applications don't care about infrastructure. And by extension, users shouldn't either. We need to spend more time with our heads down trying to make the network and other infrastructure systems functional instead of justifying why they can't be the problem. If we build the infrastructure to be bulletproof, I think we'll quickly find that the rest of the software will follow suit.

It’s almost that time of year! We’re in full preparation mode for THWACKcamp™ 2018 and can’t wait for you to experience our sessions. Covering everything from network monitoring and management to virtualization, logging, and more, this year’s lineup goes even further to make sure you hear the best from our pool of tech maestros (including industry experts and SolarWinds Head Geeks™) who are as passionate about these topics as you are. If you haven’t registered, be sure to do so soon!

 

While all the sessions sound enticing, I’m inclined to recommend you attend “Monitoring Microservices: IT’s Newest Hot Mess.” In this session, Karlo Zatylny, SolarWinds principal architect, Lee Calcote, SolarWinds head of technology strategy, and I will show you how microservices are different from other applications, when performance bottlenecks most frequently occur, and where you can add monitoring for microservices to stay one step ahead of trouble. We’ll also discuss how to extend existing infrastructure dashboards to include microservice workloads, cut troubleshooting time, and add new business metrics that measure the business goals driving microservices in the first place.

 

Now in its 7th year, THWACKcamp is the two-day premier virtual IT learning event that brings fully packed session tracks to help you learn and grow even more as a technology professional. Join thousands of skilled admins around the world in registering for this 100% free event. You won’t want to be anywhere else on October 17-18.

 

Browse all the sessions and register now for reminders, giveaway details, and more.

 

See you there!

We have all heard the saying “It’s who you know, not what you know.” There is truth in the statement. Your personal network can help you succeed in your career and take you places you would not have considered or otherwise been given the opportunity to go. I am a living example of that saying. Without my professional network, I would not be where I am today in my career. You need to invest in your professional network to help build your career. This is one network that doesn’t require complex firewalls.

 

Why do you want to build your professional network?

 

Resumes are only part of the equation when it comes to your career. Your resume is your history of events, but your professional network is what encompasses all that history and completes your story. Networking does more than help you find a new job. The people you meet can expose you to ideas and interests that you may never have considered. When you build your network, you are not only expanding your own knowledge, but also the knowledge of the people you meet.

Meeting more people leads to more opportunities, which leads to meeting more people and more opportunities, and the cycle continues to grow. Often, jobs are not posted publicly, and if someone in your network knows of one, they may reach out to you if you’re a fit. People recommend people they like; there is no other way to put it. You never know if you might find a dream job simply by meeting someone new.

You don’t need to be looking for a job to use your network. I have reached out to my network countless times for ideas or recommendations on projects I have worked on. The same is true for my contacts, who have reached out to me for advice as well. Networking builds relationships that can help deliver results down the road.

 

How to build it and keep it strong

 

Making the time – You must plan and commit time to networking. This can be done by going to local meetups or conferences. You also must be present to meet people. Talk and engage with others. You may be nervous if you don’t know anyone, but keep in mind there are probably others in the same boat as you. You don’t have to be the social butterfly of the room. Try introducing yourself to one person and see where it goes from there. If you’re at a meetup, you’re most likely among people in the same industry who work with similar technologies.

Have the right tools – Having the right tools is essential. Create that LinkedIn profile if you haven’t done so. Carry a few business cards with you. Yes, people still carry business cards in this digital age. If you don’t have business cards for your job, there is nothing stopping you from creating your own personal business cards. I have a set of work business cards and a set of my own personal branded cards. Depending on the situation, I will hand out one of them.

Online networks – Connecting and meeting with people in person is great, but sometimes that is not always possible. Online forums and communities are a great way to expand your network.  You can build credibility by helping answer questions and giving your insights. THWACK and Microsoft Tech Community are great places to start because they have many groups you can be members of.  If you’re looking for more specialized communities, the VMware VMTN and VMUG communities are another great spot for online engagement.

Stay connected and in touch - Making the connections is one thing, but staying connected will build and strengthen your network over time. Connect via LinkedIn. Engage in conversations through the online communities you are a part of. Don’t be afraid to post a comment if you read a great post by someone. Using social media like Twitter is another great way to connect with others in the industry. So many great opportunities have come to me through Twitter. No one says you need to be a Twitter celebrity to join the conversations. Follow people in the industry and see what the conversations bring about. If you’re unsure of who to follow, you can always start off with @exchangegoddess…

This week's Actuator marks the last edition published during the summer. Soon the leaves will turn brilliant colors, fall to the ground, and bury my new fire pit. With all the planning we did, we never thought about leaves, or snow. The best laid plans, right?

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Postmortem: VSTS 4 September 2018

A nice review of the Azure outage on the 4th of September. Use this to review your own business continuity planning. Reading between the lines here, I can see that Microsoft was in the process of updating their regions to have better redundancy, but they couldn’t get it done fast enough before they got caught.

 

Quantum computing is almost ready for business, startup says

If you want to get in on the ground floor of new technology, then quantum computing is your chance. I like the use of prize money to stimulate initial research, but I think they need to make it a bit higher than $1 million.

 

Torvalds breaks with past, apologises for abrasive behaviour

Admitting you have a problem is the first step. Here’s hoping Linus makes it to the “amends” part. One email will not undo decades of trolling.

 

Scientists Are Developing New Ways to Treat Disease With Cells, Not Drugs

After decades of research, this tech is starting to get more publicity. I believe this is how we will develop methods to treat cancer.

 

Google admits changing phone settings remotely

Instead of “don’t be evil,” maybe Google should try “don’t be careless.”

 

Your IoT electrical outlet can now pwn your smart TV

This is why we can’t have nice things.

 

The Quest To Find The White Elephant

Every now and then I like to share links that help us all remember the value of living in the moment. This is one of them.

 

Ah, the annual state fair. Here's a couple hundred turkey legs, ready for delivery to my belly:

 

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Much has been written about aging government IT networks, but not enough attention has been paid to the maturity of those networks. While it’s important for agency IT professionals to modernize legacy networks, it is equally critical for them to ensure that their infrastructures are mature enough to handle rapidly changing security requirements. They must have faith that any potential threats or problems can be addressed and remediated quickly.

 

In addition to looking at various network connections, IT professionals must consider the policies and procedures they use to enforce network security. Are current practices adequate for responding to current and future threats?

 

A majority of respondents to a recent SolarWinds cybersecurity survey indicated they have “good” IT controls for addressing these questions. They are managing security to the expectations of their policies.

 

However, other respondents listed their controls as “excellent.” They are going beyond just meeting policy expectations and, as a result, are seeing greater success with risk monitoring and mitigation. They feel better equipped to handle potential threats and undoubtedly share two common understandings.

 

First, they recognize that network intrusions are likely to happen and are preparing accordingly. Second, they are willing to embrace change.

 

Those two beliefs are important for creating mature networks that are ready to handle potential threats.

 

The Network Will Be Hacked—It’s Just a Matter of How Badly

 

Our cybersecurity survey revealed increasing concerns about careless, untrained, or malicious insider threats. The latter is especially disconcerting, as malicious insiders are more likely to be aware of how to beat internal processes.

 

An agency-wide proactive approach to network security is helpful. IT managers should initiate comprehensive and frequent security training for all agency professionals to help them become more cognizant of the tactics used to infiltrate networks and show them how they can help prevent attacks.

 

Accept and Embrace Change

 

When the Defense Information Systems Agency introduced its Security Technical Implementation Guides and Command Cyber Readiness Inspections, there was a palpable sense of nervousness—and even paralysis—among some people in the federal IT community. Many wondered how the new guidelines would affect their ability to do their jobs. Others were concerned about how to effectively prepare their agencies to meet DISA requirements.

 

But change is an inherent part of an IT manager's job, and the ability to manage change is essential, particularly when dealing with today’s escalating and evolving threats. Security processes must be readily adaptable to new needs and requirements. When new security policies are issued, it is because leaders perceive a potential threat that requires a different type of reaction from agencies. IT teams must be ready to work within those new policies, even if they must modify their approaches to do so.

 

The government cannot afford the equivalent of what took place in Atlanta, where the SamSam ransomware attack left the city scrambling to restore critical resources. Agencies need strong, mature networks that can quickly and automatically identify and fix issues in minutes as opposed to hours or days. With the right mix of policies and tools—and the right mindsets—teams can successfully raise their networks’ maturity levels to comfortable points.

 

Find the full article on GCN.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

Recently a coworker was giving a talk and he just froze up. When I asked him about it later, he said, "What happened yesterday has never happened before. It was like my throat clenched up and I couldn’t get words out. It psyched me out."

 

For just a moment, I'd like you to think about a time you failed. Like, REALLY failed. I don't mean small "I took the wrong exit on the highway" goofs. Or the "I forgot my Mom's birthday" mess-ups. I'm also not talking about a time you were part of an organizational or team failure where you look back and think, "Why didn't I speak up? Why didn't I step up?"

 

I'm also not talking about failures which were embarrassing, but in retrospect sweet and kind of normal. Like the time a certain freshman asked the homecoming queen to prom. (Yes, I did. No, she didn't accept, but she was nice about it. Everyone else in the hall by the quad where I decided to ask her? Not so much.) Nope. I want you to think about a moment when you really, really blew it. Dropped the ball. Failed to deliver.

 

Now, as you sit there, possibly wallowing in uncomfortable feelings (and maybe even feeling resentful that I brought it up), I'd like to suggest that you understand something really important:

 

It happens to everyone.

 

As evidence, I'd like to present a few case studies, including the one with my coworker.

 

Case #1: Yours truly.

In my senior year of college, I heard about an off-Broadway production of "Sweeney Todd" that was being produced by some folks I'd worked with previously.

 

That sentence deserves to be un-wound for those who aren't familiar with the New York theater scene.

1) Off-Broadway means getting paid, but it also means getting your Actor's Equity card. This is a Big Deal.

2) "Being produced by some folks I'd worked with previously" means I had a potential leg-up in the audition process. Getting cast wasn't a sure thing, but an in is an in.

3) "Sweeney Todd" was (and still is) my all-time favorite production.

 

Given my age and vocal range, I would be auditioning for Toby, my all-time favorite part who sings my all-time favorite song ("Not While I'm Around") in my all-time favorite play. I planned, I prepared, I rehearsed. As a senior in a large theater program, I had all the tools I needed—coaches, head shots, the works. I showed up for the audition. I knew two out of the four people in the room. We made some small talk. They asked if I was ready. The piano started.

 

I missed my entrance.

 

No biggie, everyone said. It happens. The pianist started over.

 

I missed it again. And again. The pianist tried to mouth the words to help me. No dice.

 

Confidence plummeting, panic rising, it was like I was underwater. I couldn't hear the notes any more. Couldn't find my voice.

 

I muttered an apology and got out of there as fast as I could. As I walked down the street I tried to wrap my head around what happened. There was no sugarcoating it, no handy excuses. I had bombed what should have been a sure thing. Instead of a home run, I had struck out on a slow-mo, underhand-toss softball pitch.

 

Wallowing in my sense of defeat and embarrassment, the only thing I had to fall back on was a story my Dad had told me.

 

Case #2: My Dad.

This is a story about my Dad, Joseph Adato. THE Joseph Adato. Which sounds funny until you realize he's kind of a big deal in classical music circles (for examples, start with this book. And this one.)

 

He started playing drums at 10. At 18, he was playing in the New York Philharmonic and the NBC Symphony of the Air on an as-needed basis. So then one day he gets called up to the big leagues. There was a full-time opening in the New York Philharmonic percussion section. Slam dunk, right? He shows up, music under his arm. Everything is set. His long-time teacher and a few other orchestra members are sitting there. All faces he knows.

 

He bombed it.

 

When he told me the story, he said, "It might as well have been bugs on the page. I had NO IDEA what I was looking at."

 

He knew the music. He probably could have played it from memory if he thought about it. But he honestly could not tell what he was looking at. He apologized, walked off, and went home.

 

Case #3: Lily Tomlin

Back in the 80s, I had the pleasure of seeing Lily Tomlin perform her one-woman show "The Search for Signs of Intelligent Life in the Universe." The show itself was amazing, but at the very start, something incredible happened that changed the way I looked at "failure" from that point forward.

 

Ms. Tomlin came out on stage and began her monologue. And then, mid-sentence, she stopped. Took a deep breath. Said (mostly to herself), "umm...."

 

It was clear she was off, that something wasn't clicking for her. And what she did next stuck with me. She looked out at the audience. Not in the hazy, "stare at the back wall" way people look when they are performing. She looked around at the people sitting in the audience. She acknowledged them.

 

At that moment, even as a theater student, I had no idea what would happen next. But I knew it wouldn't be any of the cliched responses you see or hear about—people freezing, running into the wings in tears, covering their face in their hands, etc.

 

Ms. Tomlin just stood there, smiling, taking us all in. Then she said, "I know this is going to sound funny, but this is a little overwhelming for me today. Do you mind if I just grab a glass of water for a second?"  Someone from the wings came on and handed her a bottle of water and she walked to the front of the stage, sat on the edge, and made small talk. With us. She asked about the weather outside, how traffic was getting to the theater, that kind of thing. Then about three minutes later, she said, "Okay, I think I'm good. Thank you," and she got up. She said, "Let's get this thing started," and she launched into her opening monologue.

 

Lessons Learned:

With my two failure stories (mine and my Dad's) under my belt, I thought long and hard about what I'd just witnessed. Here's what I learned.

 

First, if you are overwhelmed, or scared, or confused, own it. Don't try to shove it under an emotional rug because the result is that ALL of your emotions become inaccessible. Even if you are giving a quarterly report, you need to be fully present as a human being or bad things start to happen.

 

Second, remember that everyone wants you to succeed. Think about going to the circus. Do you WANT to see the tightrope walker fall to their... well, not death, but their embarrassment? No. You want to see the struggle, you want to know it's not all fake, but you want to see them succeed. You are literally CHEERING for them to succeed.

 

We're all like that. We watch someone up there giving a talk and we want them to be brilliant, to teach us, to make us laugh. And when they misstep, we don't immediately write them off. We think, "Come ON! You can DO it!"

 

So when YOU go up there, remember that is what is in everyone's mind. Every single person in that audience is silently hoping that you will be incredible. They are cheering you on. The applause has started before you say your first word.

 

If you keep that in mind, A LOT of the jitters go away. It becomes clear, and even urgent, that you work through any challenges, whether they last a moment, an hour, or a week.

 

All of this—my experience and my Dad’s and Lily Tomlin’s—was a large part of the conversation I had with my coworker as we talked through it. Let's be clear, his freezing up wasn't the end of his life or his career. It wasn't even the worst part of his week. (Hey, we all have weeks like that, right?) I told him, "So yesterday happened, but ‘yesterday’ has happened to everyone. Dad, Adele, Elvis, Pavarotti. All of them. You're in good company."

 

He said, "I hate to rejoice in your story of epic failure, but it's comforting to know I'm not alone."

 

I replied, "You aren't. You're rejoicing in the normalcy of it, in the reassuring consistency of the human condition and experience."

 

But the next time he got up to speak, it was clear he was approaching things differently. No, he didn't stop in the middle and say, "This is really overwhelming." He didn't need to. He was on top of it. But sometimes that's the point. If he did need it, the trick was there for him to use.

 

Sometimes, just knowing we have a tool in our back pocket makes the difference between success and failure.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

 

Perhaps the best part of being an IT pro—of being a technology professional in general—is that you don’t really have a choice. It’s an irrepressible self-nomination to a task force of incredibles, who prefer to wield their powers out of sight, but hope the results change the world for the better. And when you rise to the calling, it’s in many ways a relief. Think about it. How often do you question your career choice? Absolutely, we question the specific area of tech (or at least I do every now and then), but never the field. How many people do you know in other jobs who can say that?

 

It doesn’t matter if you’re just starting out or have been shredding the command line for 30 years. Today you might be reallocating hundreds of containers for a global process queue, but you were probably just as excited back when you took over Group Policy management in desktop support or terminated your first cable. We are happiest engineering the changes that make other people’s lives better, and ideally without visible fuss.

 

Once again, IT Pro Day is a chance to thank all those who keep the wheels of IT turning while making it all look easy. It’s a day to celebrate the increasingly enormous diversity of the technologies that IT pros manage. And this year I hope that you all sense something in the ether: that smart businesses are beginning to listen when we make suggestions to help. That “civilians” are less and less interested in the peculiarities of our specific jobs and are more and more interested in the fact we are technology professionals. Less that we are experts in one thing, and more that we can become experts in almost anything.

 

Maybe the world of business is finally sequencing the IT pro genome only to discover it’s our common helpfulness and flexibility chromosomes that define our species, not just a penchant for jargon or geek humor. Or maybe it’s that we continue to follow our passion to go where we can help, and we realize it’s not just systems, but our business, that benefits from a little professional advice.

 

So, here’s to you, the unassuming heroes who keep technology working. We know who you are and we’re really glad you came to work today.

 

Cheers,

 

Patrick Hubbard

Dez

IT Pro Day

Posted by Dez, Sep 18, 2018

Guess who's back, back again, IT Pro Day, tell a friend! SolarWinds has once again allowed me to circle the sun as a SolarWinds Head Geek. To me, IT Pro Day is a yearly celebration of achieving goals I once could only imagine. Curiosity has led me down numerous certification paths and even back to college a few times. I celebrate every new mind-expanding opportunity that I’ve been allowed in my career.

 

Today I was asked, “As a technology professional, what would you change if you had the time, resources, and ability to use your tech prowess to do absolutely anything?” A great mind-mapping question indeed. It led me instantly to wonder whether, if I were given unlimited time and money, I would want to focus on becoming a teacher of cybersecurity and information assurance within STEAM programs. After all, security is an art that needs to be appreciated at all levels.

 

Spreading knowledge, especially within IT security, is something I believe in passionately. There’s currently a huge shortage of security professionals, and by golly, if I have anything to say or do about it, I want that to change quickly! I now work with vocational teachers and help to encourage teachers and young students to dig in and be creative with IT.

 

If we’re not investing in the next generation, how can we expect to have products that meet their future needs? You have to carve out the time to hear out their mindsets and understand how they approach and solve problems. This allows you, whether as an individual or a company, to provide your future customers with services, products, and even marketing that will remain relevant to them.

 

Personally, this IT Pro Day has me thinking about how I can contribute more to things like STEAM programs and Cyber Days for students of all ages. It starts with an idea and can grow into a habit once you allow yourself a little time. I, for one, will start planning my days with at least 10 minutes of brainstorming on how I can be an IT contributor, and not just a consumer.

 

Destiny Bertucci, Head Geek, SolarWinds

Today is the 4th annual IT Pro Day, a day created by SolarWinds to recognize the IT pros who keep businesses up and running each and every day, all year long.

 

As an IT pro, I personally know that no one ever stops by your desk to say “thanks” for the fact that everything is working as expected. No, people only contact IT pros for one of two reasons: either something is broken, or something might become broken. And if it’s not something you know how to fix, you’ll be expected to fix it, and fast.

 

Nearly 70% of IT pros respond to one-off user requests daily. This amount of unplanned work leads to madness for mere mortals. The unplanned work doesn’t stop, either. IT pros are the first-level tech support for friends and family. Thanks to the ever-connected world in which we live, IT pros are responding to calls for help at all hours of the day.

 

Put this all together and it is easy to understand that the best IT pros are one part Batman, one part MacGyver, and three parts Dr. House. We respond to alerts when called, we fix things in creative ways, and we do it all while reminding you we “are almost always eventually right.”

 

That’s right, IT pros can see the future. We know all viewpoints will eventually be consistent with ours. It is inevitable that there will come a point in time when your data will outgrow your current schema, code, and hardware. We know this because that’s been our normal ever since Codd invented the relational database.

 

IT pros spend hours finding ways to automate away tasks. Automation is a great way to help reduce risk and recover from failures. It’s also a great way to help get some sleep at night, and on weekends. Maybe even spend time working in the yard, building a nice firepit, where we sit and relax for 5 minutes before we fix the neighbor’s Wi-Fi.

 

We don’t do this for the money. We do this because we want what everyone wants: happy customers. With end users as our top priority, we want to keep good people from making bad decisions. Sure, money helps, but that’s not our end goal. (But if someone in a corner office on the 4th floor in Austin is reading this, I want to remind them that bacon makes for a great gift during the holidays.)

 

Today is the day to say THANK YOU to the IT pro, and even give a #datahug to the ones who had enough time to shower before heading to the office.

 

Cheers!

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

Leon Adato

Happy IT Pro Day!

Posted by Leon Adato, Sep 18, 2018

As we all know, lists and their thinly veiled derivatives, listicles, drive social media. Post the "Top 5 Kinds of Bellybutton Lint" and you'll probably get at least a few clicks from people with 5 minutes and nothing better to read. One of the popular list types going around right now is the "The Smartest People I Know Do xxx" list. Attributed to everyone from Bill Gates to Abraham Lincoln, they supposedly offer a window into the habits of the rich, famous, powerful, and successful.

 

Of course, many of these kinds of lists have as their honored ancestor the book which arguably started the self-help book trend, Stephen Covey's "Seven Habits of Highly Effective People."

 

Early in my career, I was dutifully reading through it when my boss, Maria, asked me, "What makes anyone think that Covey, or the people he used as sources for that book, were actually effective?" I was caught up short. I mean, the book had already sold over 25 million copies. But my boss knew her stuff, and she was, like all the best tech professionals, asking to see the data before she wasted a single processing cycle on executing those instructions.

 

I thought about what she taught me as we were ramping up for IT Professional Day 2018. It's not that Covey's book or those listicles are necessarily wrong; it's just that they're not demonstrably true, either. There's no data. Which is why I'm so proud of the Tech Pro Day survey (https://www.solarwinds.com/resources/survey/tech-pro-survey-north-america). Rather than ask thought leaders or folks in tech management what they THINK would be effective, we asked boots-on-the-ground IT pros what they do and how they relate to the tech that makes up so much of their world. The survey applies data to understand what effective and engaged IT practitioners are actually doing, both to stay effective and to keep themselves feeling engaged.

 

What we learned painted a picture of the habits of highly effective tech pros.

  1. We help others. Even when it's not strictly our job, we answer help tickets and take "drive by" questions.
  2. The user is never far from our mind. Their experience, their needs, the tasks they are trying to complete are paramount.
  3. When new tech comes on the scene, our first thought is how to use it to make things better close to home—the business, our day-to-day tasks, and so on.
  4. But our second thought is how to use it to make the world better—education, housing, healthcare, the environment, and more.
  5. We honestly love the tech we've built a career around, so much that we use our free time to build our skills; we incorporate tech into our home projects; and we even leverage tech to make our vacations more, well, techie.

 

More than anything, what showed through the data was how engaged we are with the industry. Not content to wait for the latest innovation to roll into our shop (or over us like a techno-tidal wave), we actively seek it out, play with early betas, share ideas on forums, and generally strive to be the best at what we do.

 

You could say that the number one habit of highly effective IT professionals is to be Tech PROactive.

 

So, however YOU plan to celebrate, acknowledge, or observe IT Pro Day this year, everyone here at SolarWinds wants you to know that it's no baseless rumor, no urban legend, but hard, data-sourced fact: Your skills are essential to the business and your work is appreciated. You are awesome.

 

 

 


The collection of operational and analytics information can be an addictive habit, especially in the case of an interesting and active network. However, this information can quickly and easily overwhelm an aggregation system when the constant fire hose of information begins. Assuming the goal is to collect and use this information, it becomes clear that a proper strategy is required. This strategy should comprise a number of elements, including consideration of needs and requirements before beginning a project of this scope.

 

In practice, gathering requirements will likely happen either in parallel or, as with many things in networking, be adjusted on demand, building the airplane while it is in flight. Course correction should be an oft-used tool in any technologist's toolbox. New peaks and valleys, pitfalls, and advantages should be fed into the constant evaluation that occurs in any dynamic environment. Critical parts of this strategy should be considered for nearly all endeavors of this kind. But even before that, the reasoning and use cases should be identified.

 

A few of the more important questions that need to be answered are:

 

  • What data is available?
  • What do we expect to do with the data?
  • How can we access the data?
  • Who can access what aspects of the data types?
  • Where does the data live?
  • What is the retention policy on each data type?
  • What is the storage model of the data? Is it encrypted at rest? Is it encrypted in transit?
  • How is the data ingested?

 

Starting with these questions can dramatically simplify and smooth the execution process of each part of the project. The answers to these questions may change, too. There is no fault in course correction, as mentioned above. It is part of the continuous re-evaluation process that often marks a successful plan.
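
One lightweight way to keep those answers actionable is to capture them in a machine-readable policy map that is reviewed and revised along with the plan. This is only a sketch; the data types, retention windows, and team names below are hypothetical.

```python
# Hypothetical answers to "what do we keep, for how long, and who can read it?"
RETENTION_POLICY = {
    "netflow":      {"days": 90,  "encrypt_at_rest": False, "access": ["netops"]},
    "syslog":       {"days": 365, "encrypt_at_rest": True,  "access": ["netops", "security"]},
    "snmp_metrics": {"days": 730, "encrypt_at_rest": False, "access": ["netops", "capacity"]},
    "latency":      {"days": 180, "encrypt_at_rest": False, "access": ["netops"]},
}

def retention_days(data_type: str) -> int:
    """Look up how long a given data type is kept."""
    return RETENTION_POLICY[data_type]["days"]

print(retention_days("syslog"))  # 365
```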

 

Given these questions, let’s walk through a workflow to understand the reasoning for them and illuminate the usefulness of a solid monitoring and analytics plan. “What data is available?” will drive a huge number of questions and their answers. Let’s assume that the goal is to consume network flow information, system and host log data, polled SNMP time series data, and latency information. Clearly this is a large set of very diverse information, all of which should be readily available. The first mistake most engineers make is diving into the weeds of what tools to use straight away. This is a solved problem, and frankly it is far less relevant to the overall project than the rest of the questions. Use the tools that you understand, can afford to operate (both fiscally and operationally), and that provide the interfaces that you need. Set that detail aside, as answers to some of the other questions may decide it for you.

 

How will we store the data? Time series is easy: that typically goes into an RRD (round-robin database). Will there be a need for complex queries against things like NetFlow and other text, such as syslog? If so, there may be a need for an indexing tool. There are many commercial and open source options. Keep in mind that this is one of the more nuanced parts, as answers to this question may change answers to the others, specifically retention, access, and storage location. Data storage is the hidden bane of an analytics system. Disk isn't expensive, but it’s hard to do right, and on budget. Whatever disk space is required, always, always, always add headroom. You will need it later, or you may need to adjust the retention policy.
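
For the time-series case, here is a minimal sketch of what "goes into an RRD" looks like in practice, driving the rrdtool CLI from Python. The data source, step, and retention windows are illustrative assumptions.

```python
# Create a round-robin database for one polled gauge and push a sample.
# Assumes the rrdtool CLI is installed and on the PATH.
import subprocess
import time

subprocess.run([
    "rrdtool", "create", "if_util.rrd",
    "--step", "300",                 # one sample expected every 5 minutes
    "DS:util:GAUGE:600:0:100",       # data source: 0-100%, heartbeat 600s
    "RRA:AVERAGE:0.5:1:2016",        # 2016 x 5 min = 7 days of raw averages
    "RRA:AVERAGE:0.5:12:1488",       # 1488 x 1 hr = ~62 days of hourly averages
], check=True)

# Each polling cycle, push the latest sample (the value here is made up).
subprocess.run(
    ["rrdtool", "update", "if_util.rrd", f"{int(time.time())}:42.5"],
    check=True,
)
```

Note that an RRD is fixed-size by design, which is one reason time series is the easy case for capacity planning; it's the indexed text data that eats the headroom.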

 

Encryption comes into play here as well. Typical security practice is to encrypt in flight and at rest, but in many cases this isn’t feasible (think router syslog). Encryption at rest also incurs a fairly heavy cost, both one-time (CPU cycles to encrypt) and perpetual (decryption for access). In many cases, the justification for encryption does not make sense. Exceptions should be documented and the risks formally accepted by management, so there is a clear record of each decision on the off chance that sensitive information is leaked or exfiltrated.

 

With all of this data, what is the real end goal? Simple: a baseline. Nearly all monitoring and measurement systems provide, at their elemental level, a baseline. Knowing how something operates is fundamental to successful management of any resource, and networks are no exception. With stored statistical information, it becomes significantly easier to identify issues. Functionally, any data collected will likely be useful at some point if it is available and referenced. Having a solid plan for how the statistical data is handled is the foundation of ensuring those deliverables are met.

When it comes to IT, things go wrong from time to time. Servers crash, memory goes bad, power supplies die, files get corrupted, backups get corrupted...there are so many things that can go wrong. When things do go wrong, you work to troubleshoot the issue and end up bringing it all back online as quickly as humanly possible. It feels good; you might even high-five or fist-bump your co-worker. For the admin, this is a win. However, for the higher-ups, this is where the finger pointing begins. Have you ever had a manager ask you “So what was the root cause?” or say “Let’s drill down and find the root cause”?

 

 

I have nightmares about having to write after action reports (AARs) on what happened and what the root cause was. In my imagination, the root cause is a nasty monster that wreaks havoc in your data center, the kind of monster that lived under your bed when you were 8 years old, only now it lives in your data center. This monster barely leaves a trace of evidence as to what it did to bring your systems down or corrupt them. This is where a good systems monitoring tool steps in to save the day and help sniff out the root cause.

 

Three Things to Look for in a Good Root Cause Analysis Tool

A good root cause analysis (RCA) tool can accomplish three things for you, which together give you the best read on what the root cause most likely is and how to prevent it in the future.

  1. A good RCA tool will…be both reactive and predictive. You don’t want a tool that simply points to logs or directories where there might be issues. You want a tool that can describe what happened in detail and point to the location of the issue. You can't begin to track down the issue if you don’t understand what happened and have a clear timeline of events.  Second, the tool can learn patterns of activity within the data center that allow it to become predictive in the future if it sees things going downhill. 
  2. A good RCA tool will…build a baseline and continue to update that baseline as time goes by.  The idea here is for the RCA tool to really understand what looks “normal” to you, what is a normal set of activities and events that take place within your systems. When a consistent and accurate baseline is learned, the RCA tool can get much more accurate as to what a root cause might be when things happen outside of what’s normal. 
  3. A good RCA tool will…sort out what matters and what doesn’t. The last thing you want is a false positive when it comes to root cause analysis. The best tools can accurately separate false positives from real events that could do serious damage to your systems (see the sketch after this list).
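
Here is a minimal sketch of how a tool might combine these three ideas: an exponentially weighted moving average (EWMA) serves as a continuously updated baseline, and an alert fires only after several consecutive out-of-band samples, which damps one-off false positives. The parameters and sample values are hypothetical.

```python
# Flag sustained deviations from an EWMA baseline; ignore single spikes.
def detect(samples, alpha=0.1, band=0.25, consecutive=3):
    """Yield indexes where `consecutive` samples in a row stray more than
    `band` (as a fraction) from the running EWMA baseline."""
    ewma = samples[0]
    streak = 0
    for i, x in enumerate(samples[1:], start=1):
        if ewma and abs(x - ewma) / ewma > band:
            streak += 1
            if streak >= consecutive:
                yield i
        else:
            streak = 0
        ewma = alpha * x + (1 - alpha) * ewma  # update baseline after checking

# A lone spike would be ignored; this sustained shift alerts at index 6.
print(list(detect([100, 102, 98, 101, 150, 149, 152, 151, 100])))  # [6, 7]
```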

 

Use More Than One Method if Necessary

Letting your RCA tool become a crutch for your team can be problematic. There will be times when an issue is so severe and confusing that it’s necessary to reach out for help. The best monitoring tools do a good job of bundling log files for export should you need to bring in a vendor support technician. Use the info gathered from logs, plus the RCA tool output and vendor support, for those times when critical systems are down hard and your business is losing money every minute that it’s down.

In Austin this week, so if you are wondering "who brought the rain," well now you know. Here's hoping the sun makes an appearance before I head home.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Tesla shares crash after Elon Musk smokes joint on live web show

Everyone else knows this guy is self-destructing, right in front of our eyes, right?

 

Geology Is Like Augmented Reality For The Planet

A nice reminder that there are amazing stories being told, often right in front of our eyes, if we are aware they exist. Geology is one example, history is another. I make a point to get my children to take time to stop and think about what they are seeing, the history of it all.

 

Microsoft Requires Paid Parental Leave for Subcontractors

I think this is a good first step for Microsoft to force a change that has benefits for everyone. But I’m cautious about this precedent. And why only focus on U.S. companies? Why not address these issues globally, in the countries where device manufacturing happens?

 

Illusion of control: Why the world is full of buttons that don't work

I knew that ‘close door’ button was fake!

 

British Airways Says Customers' Financial Data Was Hacked In 380,000 Transactions

Honestly, I’m a bit impressed that they discovered the breach so quickly. Incidents such as this one can last for a year or more before a company finds and closes the hole.

 

Adding clean energy to the Sahara could make it rain (and not just figuratively)

This is the first time I’ve heard the idea that clean energy could make it rain in the Sahara, and now I want to make this happen.

 

Japan developing ‘pre-crime’ artificial intelligence to predict money laundering and terror attacks

No mention of predicting Godzilla attacks, though.

 

After months of hard work, it's good to sit back and enjoy what you built with your own hands:

 

In this series, we’ve covered some key areas that can help you prepare for potential attacks. Preparation is essential. Security policies are essential. Understanding your network and its assets is essential. But what happens if a threat is detected? What can we do to monitor for threats? This final blog looks at security monitoring through an understanding of data. Data contains information and exposes actions. Data is the vehicle for compromise, so it is dynamic and must be tracked in real time. Being able to understand data streams is important for identifying and reacting to threats, and then applying the correct protection and mitigation methods. Investigation and response depend on an understanding of these data types.

 

Raw Data: Sourced directly from a host in the creation of an event log. Some events are pushed from the source using protocols such as syslog and SNMP. Protocols such as SCP, SFTP, FTP, and S3 are typically used to pull event logs from a source system.
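
As a minimal sketch of the push model, here's a tiny Python listener that accepts raw syslog messages over UDP. The standard syslog port is 514, which typically requires elevated privileges, so this sketch listens on 5514 instead.

    import socketserver

    class SyslogHandler(socketserver.BaseRequestHandler):
        """Receive raw syslog events pushed from source hosts over UDP."""
        def handle(self):
            data, _sock = self.request          # UDP handlers get (data, socket)
            message = data.strip().decode(errors="replace")
            print(f"raw event from {self.client_address[0]}: {message}")

    if __name__ == "__main__":
        with socketserver.UDPServer(("0.0.0.0", 5514), SyslogHandler) as server:
            server.serve_forever()              # collect events until interrupted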

 

Parsed Data: Parsing involves matching raw logs against rules or patterns to determine which text strings and variables should be mapped to database fields or attributes. This is a common function of SIEM tools that aggregate raw data streams for a wide variety of telemetry types. Sometimes an agent on a source host or an intermediate aggregation point maps raw message data into a vendor-specific format such as CEF syslog or LEEF, which is a first step toward normalizing data.
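
For illustration, here's a minimal sketch of the idea using a hypothetical pattern for a BSD-style syslog line; real SIEM parsers ship with large rule libraries covering each telemetry type.

    import re

    # Hypothetical pattern for one BSD-style syslog layout.
    PATTERN = re.compile(
        r"^<(?P<pri>\d+)>(?P<timestamp>\w{3} +\d+ [\d:]+) "
        r"(?P<host>\S+) (?P<app>[\w\-/]+)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)$"
    )

    raw = "<34>Sep 7 10:15:02 fw01 sshd[4721]: Failed password for root from 203.0.113.9"
    match = PATTERN.match(raw)
    if match:
        fields = match.groupdict()   # text strings mapped to named attributes
        print(fields["host"], fields["app"], fields["msg"])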

 

Normalized Data: Normalization means transforming variables in the data to a specific category or type for aggregation purposes. For example, several attributes that refer to the same threat type under various naming conventions can be assigned to a single attribute defined in the processing system’s schema. This introduces efficiency when storing and searching data.
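
A minimal sketch of that mapping, with invented variant names: different sources label the same threat type differently, and normalization folds each variant onto one schema-defined value.

    # Invented variants on the left, schema-defined categories on the right.
    THREAT_TYPE_MAP = {
        "trojan": "malware",
        "trojan-horse": "malware",
        "win.malware": "malware",
        "portsweep": "reconnaissance",
        "port-scan": "reconnaissance",
    }

    def normalize(event: dict) -> dict:
        raw_type = event.get("threat_type", "").lower()
        event["threat_type"] = THREAT_TYPE_MAP.get(raw_type, "unknown")
        return event

    print(normalize({"src": "10.0.0.5", "threat_type": "Port-Scan"}))
    # {'src': '10.0.0.5', 'threat_type': 'reconnaissance'}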

 

Full Packet Capture: A capture of all Ethernet/IP activity, in contrast to a filtered packet capture focusing on a subset of traffic. It is important for network forensics and cybersecurity purposes, especially in the case of an advanced persistent threat whose characteristics may be missed in a filtered capture. FPC may be fed into static and dynamic analysis systems, or payloads extracted from it may be detonated in a sandbox for greater understanding.

 

Metadata: Summary information about data. It is extracted from FPC to provide a focus on key fields and values, not just payloads, which may be encrypted. By looking at the metadata associated with a flow of network traffic, it can be easier to tell the difference between legitimate and bad traffic than by trying to examine the detailed contents of every data packet. Important metadata include transaction volumes, IP addresses, email addresses, and certificates for TLS and SSL.
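
As a minimal sketch, with invented per-packet records: roll packets up into per-conversation metadata (volumes, endpoints, TLS server names) without ever touching payloads.

    from collections import defaultdict

    # Invented per-packet records; only headers and the TLS SNI, no payloads.
    packets = [
        {"src": "10.0.0.5", "dst": "198.51.100.7", "bytes": 1200, "sni": "example.com"},
        {"src": "10.0.0.5", "dst": "198.51.100.7", "bytes": 800,  "sni": "example.com"},
    ]

    summary = defaultdict(lambda: {"bytes": 0, "sni": set()})
    for pkt in packets:
        key = (pkt["src"], pkt["dst"])
        summary[key]["bytes"] += pkt["bytes"]
        summary[key]["sni"].add(pkt["sni"])

    for (src, dst), meta in summary.items():
        print(f"{src} -> {dst}: {meta['bytes']} bytes, names {meta['sni']}")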

 

Flow Data: Flows represent network activity in a session between two hosts by normalizing IP addresses, ports, byte and packet counts, and other data into flow records. A flow starts when a flow collector detects the first packet with a unique source IP address, destination IP address, source port, destination port, and other specific protocol options. Flow data is often used to look for threats identifiable by their behavior across a flow rather than through atomic actions.
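
To illustrate, here's a minimal sketch of normalizing invented packet records into flow records keyed by the 5-tuple, with byte and packet counters.

    from collections import defaultdict

    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def record(pkt):
        """Fold a packet into the flow identified by its 5-tuple."""
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["size"]

    # The first packet with a new unique 5-tuple starts a new flow record.
    record({"src_ip": "10.0.0.5", "dst_ip": "203.0.113.9",
            "src_port": 51514, "dst_port": 443, "proto": "tcp", "size": 640})
    print(dict(flows))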

 

DPI Data: Deep packet inspection is stateful packet classification up to the application layer, usually carried out as a function of next-generation firewalls and IPS. It is an expansion of the traditional "5-tuple": source and destination IP, source and destination port, and protocol. DPI is part of Application Visibility and Control (AVC) systems, which extract useful metadata and compare it to the well-known behaviors of applications and protocols to identify anomalies and statistically significant deviations in those behaviors.

 

Statistical Data: Makes use of statistical normalization, where the values of columns in a dataset are rescaled to a common scale without distorting differences in the ranges of values or losing information. Some algorithms require this to model the data correctly, such as curve-fitting algorithms and the clustering algorithms used in unsupervised machine learning. Statistical data is used to detect user-based threats with user and entity behavior analytics, or to identify network threats through network traffic and behavior analysis.
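
As a minimal sketch of that rescaling, here's min-max normalization in Python over an invented logins-per-hour column.

    def min_max_scale(values):
        """Rescale a column to the range [0, 1], preserving relative differences."""
        lo, hi = min(values), max(values)
        if hi == lo:                      # constant column: nothing to scale
            return [0.0 for _ in values]
        return [(v - lo) / (hi - lo) for v in values]

    logins_per_hour = [2, 3, 2, 40, 3]    # invented sample column
    print(min_max_scale(logins_per_hour))
    # [0.0, 0.026..., 0.0, 1.0, 0.026...]: the spike stands out on a common scale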

 

Extracted Data: Data retrieved from data sources (like a SIEM database) using specific search patterns to correlate events and build a complete picture of a session or attack. Examples include mapping DNS logs and HTTP logs together to find a threat actor by searching on metadata or IoCs, or tracking the path of an email using its Message ID (MID) value.
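
As a minimal sketch, with hypothetical log layouts: correlate DNS and HTTP events on a shared indicator (the domain, and the address it resolved to) to rebuild the picture of a session.

    dns_log = [
        {"ts": 100, "client": "10.0.0.5", "query": "bad.example.net", "answer": "203.0.113.9"},
    ]
    http_log = [
        {"ts": 103, "client": "10.0.0.5", "dst_ip": "203.0.113.9", "uri": "/payload.bin"},
    ]

    iocs = {"bad.example.net"}  # hypothetical indicator list

    # Addresses resolved from known-bad domains...
    suspect_ips = {d["answer"] for d in dns_log if d["query"] in iocs}
    # ...and the HTTP requests that followed those lookups.
    hits = [h for h in http_log if h["dst_ip"] in suspect_ips]
    print(hits)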

 

Security Intelligence Enriched Data: Adds information such as reputation and threat scores to metadata to help identify potentially compromised hosts within the network, based on a threat analysis report containing malicious IP addresses or domains. For example, mapping DNS, HTTP, and threat intelligence data together can identify connections to known blacklisted sites.
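
A minimal sketch of enrichment, assuming a hypothetical threat feed that scores addresses from 0 (known bad) to 100 (clean).

    reputation = {"203.0.113.9": 5, "198.51.100.7": 92}  # hypothetical feed

    flows = [{"src": "10.0.0.5", "dst": "203.0.113.9", "bytes": 2048}]
    for flow in flows:
        # Attach the score to the metadata; unknown destinations stay neutral.
        flow["dst_reputation"] = reputation.get(flow["dst"], 50)
        if flow["dst_reputation"] < 20:
            print("possibly compromised host:", flow["src"], "talking to", flow["dst"])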

 

That’s a wrap on this series presenting some cybersecurity fundamentals. Remember, you can’t plan for every threat, and you can’t anticipate the actions of users, both friend and foe. What you can do is be as prepared as possible and reduce the time it takes to detect attacks. You can also put processes and knowledge in place to respond and remediate efficiently. Know your environment, and keep up to date with changes in the threat landscape and how they relate to your use cases. Don’t get complacent: stay prepared.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Unfortunately, even with an incredibly fast infrastructure, if application performance is poor, then constituents will more than likely have a bad experience. Proper application performance management (APM) is vital for identifying application performance issues and helping ensure that they maintain an expected level of service. Load testing, synthetic and real-user monitoring, and root-cause analysis are just a few of the key tools that comprise a balanced approach to APM.

 

Understanding the importance of application management raises the question: How can public sector IT professionals ensure that their applications are performing optimally?

 

Here are five key components that should be in every IT pro’s APM toolkit.

 

1. End-user experience monitoring

This should be high on the priority list for public sector IT professionals’ APM efforts. End-user experience monitoring tools collect information on interactions with the application and can help identify any problems that are having a negative impact on the constituents’ experience.

 

Many factors can affect the user experience. As local government bodies move closer to complete cloud adoption, it’s important to find a tool that can monitor both on-premise and hosted applications. It’s also useful to consider a tool that makes provisions for instant changes to network links or external servers if either, or both, are compromising the end-user experience.

 

2. Runtime application architecture discovery

This part of APM looks at the hardware and software components involved in application execution—as well as the paths they use to communicate—to help identify problems and establish their scope.

 

With the complexity of today’s networks, discovering and displaying all the components that contribute to application performance is a substantial task. As such, it is important to select a monitoring tool that provides real-time insight into the application delivery infrastructure.

 

3. User-defined transaction profiling

Understanding user-defined transactions as they navigate the architecture helps IT teams to map out events as they occur across the various components. In addition, it can provide an understanding of where and when events are occurring, and whether they are occurring as efficiently as possible.

 

4. Component deep-dive monitoring

This component of APM provides an in-depth understanding of the components and pathways discovered in previous steps. In a nutshell, the IT management team conducts in-depth monitoring of the resources used by, and events occurring within, the application performance infrastructure.

 

5. Analytics

Finally, as with any IT scenario, having information is one thing; understanding it is another.

 

APM analytics tools help IT teams to:

 

  • Set a performance baseline that provides an understanding of current and historical performance, and set an expectation of what a “normal” application workload entails
  • Quickly identify, pinpoint, and eliminate application performance issues based on historical/baseline data
  • Anticipate and alleviate potential future issues through actionable patterns
  • Identify areas for improvement by mapping infrastructure changes to performance changes

 

As IT environments become more complex, it is equally important to choose a set of APM tools that integrate with one another and with other tools and solutions already in place. Having visibility across all pieces of the application environment is critical to having a complete understanding of application performance and helping ensure “always on” optimization.

 

Find the full article on GovTech Leaders.

 


It was LinkedIn that reminded me how long it's been. A sudden flood of "Congratulations on your work anniversary" messages (thank you, by the way, to all the well-wishers) hit my inbox, little popup messages lining the bottom of the dedicated tab in Google Chrome. My first thought was, "Has it been that long?" followed almost immediately by, "OF COURSE it's been that long."

 

It's similar to what parents think about their kids: "Where has the time gone?" followed by a rush of memories, each one distinct and unique, carrying its own particular imprint on our emotions. Every one of the more than 1,460 days is there, if I think hard enough about it. Not all of them have been perfect days. In many moments, I was not at my best. No matter how much fun I have at it, work is still work.

 

But what amazing work it's been.

 

Over the last four years, I have had the joy and privilege to meet so many amazing people, many of whom you've gotten to know along with me: experts in the field who have the ear of thousands; brilliant minds within SolarWinds who are setting the course of our products and inventing, sometimes out of whole cloth, new methods of doing things we had only imagined a few years ago; and people who have transitioned from one to the other (and sometimes back again). But along with those who shine brightly and capture our attention—be it in blogs, videos, webinars, or eBooks—there are incredible people I work with every day who are quietly brilliant, consistently awesome, dependably insightful. These are folks who avoid the spotlight (and a few who actively run from the room if a camera is turned on), but who are passionate and driven and engaged and skilled. And this job has allowed me to work with all of them. To learn from them. And occasionally teach them something, even if it's on the history of Dungeons & Dragons, or how to correctly pronounce "challah."

 

Second only to the people is the work itself. When I told my wife about the job after my first interview—how I'd be writing for publications, blogging, creating video content, and speaking at industry events—she said, "I hope you didn't tell them you'd have done all that for free!" I would have, but I wouldn't have had the chance to do it quite so much. In four years, I've had the chance to create 12 eBooks, write 254 essays or blog posts, and appear in 176 videos. Yes, yes, #humblebrag. I'm celebrating my Head Geekiversary. I think I've earned a little bit of workplace pride.

 

I've had four glorious years to venture out to conventions and user groups and meet people who use SolarWinds products to solve their very real, very important challenges. To help celebrate (and as often as I can, publicly share) their successes and to hopefully be part of resolving any of the challenges they've faced. To marvel at the arc of their careers, whether they were just getting started, somewhere in the middle, or reflecting back after many years.

 

And you know what? After all this time, it's still my dream job. It's still every bit as thrilling to me today when I get to tell people "I'm a Head Geek for SolarWinds" as it was back on that very first day (My name is Leon Adato, and I'm a SolarWinds Head Geek).

 

So thank you again to everyone who messaged me with "congratulations," both for the kind words and for the chance to stop and take a moment to appreciate just how wonderful it's been.

 


Have you ever read about TV ratings? Almost every person who watches TV has heard of the ratings produced by the Nielsen Media Research group. These statistics shape how we watch TV and decide whether or not shows are renewed for more episodes in the future.

But, how does Nielsen handle longer programs? How do they track the Super Bowl? Can they really tell how many people were tuned in for the entire event? Or who stopped watching at halftime after the commercials were finished? This particular type of tracking could let advertisers know when they want their commercials to air. And for the network broadcasting the event, it could help them figure out how much to charge during the busiest viewing times.

You might be interested to know that Nielsen tracks their programs in 15-minute increments. They can tell who was tuned in for a particular quarter-hour segment over the course of multiple hours. Nielsen has learned that monitoring the components of a TV show helps them understand the analytics behind the entire program. Understanding microtrends helps them give their customers the most complete picture possible.

Now, let's extend this type of analysis to the applications that we use. In the old days, it was easy to figure out what we needed to monitor. There were one or two servers that ran each application. If we kept an eye on those devices, we could reliably predict the performance of the software and the happiness of the users. Life was simple.

Enter virtualization. Once we started virtualizing the servers that we used to rely on for applications, we gained the ability to move those applications around. Instead of an inoperable server causing our application to be offline, we could move that application to a different system and keep it running. As virtual machines matured, we could increase performance and reliability. We could also make applications run across data centers to provide increased capabilities across geographic locations.

This all leads to the cloud. Now, virtual machines could be moved hither and yon and didn't need to be located on-prem. Instead, we just needed to create new virtual machines to stand up an application. But, even if the hardware was no longer located in our data center, we still needed to monitor what we were doing. If we couldn't monitor the hardware components, we still needed to monitor the virtual machines.

This is where our Nielsen example comes back into play. Nielsen knows how important it is to monitor the components of a program. So too must we keep an eye on the underlying components of our infrastructure. With virtual machines becoming the key components of our applications today, we must know how they are behaving to understand how our applications are performing.

What if the component virtual machines are sitting on opposite sides of a relatively slow link? What if the database tier is in Oregon while the front-end for the application is in Virginia? Would it cause an issue if the replication between virtual machines on the back-end failed for some reason due to misconfiguration and we didn't catch it until they got out of sync? There are a multitude of things we can think about that might keep us up at night figuring out how to monitor virtual machines.

Now, amplify that mess even further with containers. The new vogue is to spin up short-lived services in Docker containers, often orchestrated by Kubernetes. If you think monitoring component virtual machines is hard today, just wait until those constructs are destroyed as fast as they are created. Problems can disappear before they're even found, and then they get repeated over and over again.

The key is to monitor both the application and the infrastructure constructs. But it also requires a shift in thinking. You can't just rely on SNMP to save the day yet again. You have to do the research to figure out how best to monitor not only the application software but the way it is contained in your cloud provider or data center. If you don't know what to look for, you might miss the pieces that could be critical to figuring out what's broken or, worse yet, what's causing performance issues without actually causing things to break.

What do The Guru, The Expert, The Maven, The Trailblazer, The Leading Light, The Practice Leader, The Heavyweight, The Opinion Shaper, and The Influencer all have in common? They are all examples of what are commonly referred to as “Thought Leaders.” Some may say it’s the latest buzzword for experts and influencers, but buzzword or not, Thought Leaders have been around long before the term came into use. Thought Leaders are the go-to experts among industry colleagues and peers. They are the influencers that lead direction within an organization, and sometimes they are that leading light in your department who innovates new ideas and visions. Thought Leaders are often not in the direct line of the management chain; instead, they complement management and lead by example to execute vision and goals.

 

Not All Thought Leaders are the Same

The saying “one size does NOT fit all” also applies to Thought Leadership, because not all Thought Leaders are the same. Some Thought Leaders are about cutting-edge trends, while others are there to inspire. Most, however, are experts in a field or industry and often take a stance on a particular topic. They look beyond the business agenda and see the overall picture, because every industry is constantly evolving. Having insight into the trends and applying them to achieve and deliver results is part of the equation. You must also be able to lead others and want to develop them as people, not just as players on a team.

When someone asks me how they can become a Thought Leader, I tell them this isn’t about you; it’s about others. When you help others by sharing your knowledge and experiences, all that other stuff comes naturally. Thought leadership status isn’t obtained through a single article or social media post on Twitter or LinkedIn. It’s something you build through your experiences, creating credibility among your followers or your team at work. Experience takes time. Experience also means not only learning, but listening to others. Everyone has different ideas and opinions, and being humble enough to listen to and understand others is a critical part of the learning process. Thought Leaders don’t have all the answers, and they are constantly learning themselves.

Credibility does not always mean holding all the latest industry certifications. While they can help, they aren’t everything, because real-life experience is just as important. Someone who has every certification in the industry but no applied real-world experience will probably not earn the same credibility as someone with 15+ years of experience and fewer certifications.

Being the “go-to” person means defining trends or topics and showing your followers how they can take that knowledge further. Once you are there, it doesn’t stop, either. You will need to stay involved and keep learning; otherwise, your followers will eventually stop looking to you for guidance and that “vision.”

It’s About Others

I still get shocked sometimes when people refer to me as a Thought Leader, because I didn’t set out to become one. What I wanted to do, and still want to do, is make a difference in the world, in the company I work for, and for my coworkers and peers. I wanted to help others be successful by sharing whatever knowledge and skills I have. My hope was that by sharing my experiences, others would be empowered to better themselves. Early in my IT career, a manager gave me the best advice: sharing your knowledge will make you more valuable, and it will motivate you to learn more. I have kept that advice ever since and use it daily.

Back from VMworld and it's hotter here at home than in Las Vegas. I've no idea how that is possible. VMworld was a wonderful show, and it's always good to see my #vFamily.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

AWS announces Amazon RDS on VMware

There were lots of announcements last week at VMworld, but I found this one to be the most interesting. AWS and VMware are bringing the cloud to your data center. I expect Microsoft to follow suit. It would appear that all three companies are working together to control and maintain infrastructure for the entire world.

 

Earthquake early-warning system successfully sent alarm before temblor felt in Pasadena

I applaud the effort here, and hope that these systems will allow for more advanced warnings in later versions. Because alerting me 3 seconds before an earthquake strikes is not enough of a warning.

 

Two seconds to take a bite out of mobile bank fraud with Artificial Intelligence

OTOH, alerting within two seconds seems reasonable for detecting fraud, because fraud usually doesn’t involve a building falling on top of me. And this is a great example of how AI research can make the world a better place.

 

Video games that allow in-game purchases to carry Pegi warning

I think this is a great first step, but more work is needed. I’d like to see video games publish the amount of time and/or money necessary to complete the game.

 

World's Oldest Customer Complaint Goes Viral

After reading this, I do not recommend shopping at Ea-nasir’s store. Avoid.

 

Major Quantum Computing Advance Made Obsolete by Teenager

Ignore the clickbait title. The kid didn’t make anything obsolete. But he did stumble across a new model for recommendations. The likely result is that when quantum computing finally lands, researchers will be able to focus on solving real issues, like global warming, and not worry about what movies a person wants to rent next.

 

How to Roll a Strong Password with 20-Sided Dice and Fandom-Inspired Wordlists

The next time you need to rotate a password, start here.

 

Adding this to my conference survival guide:

 

Thomas LaRock and Karen Lopez fighting an octomonster to protect servers.

In my previous posts about Building a Culture of Data Protection (Overview, Development, Features, Expectations) I covered the background of building a culture.  In this post, I'll be going over the Tools, People, and Roles I recommend to successful organizations.

 

Tools

 

Given the volume and complexity of modern data systems, teams have to use requirements, design, development, test, and deployment tools to adequately protect data. Gone are the days of "I'll write a quick script to do this; it's faster." Sure, scripts are important for automation and establishing repeatable processes. But if you find yourself opening up your favorite text editor to design a database, you are starting in the wrong place. I recommend these as the minimum tool stack for data protection:

 

  • Data modeling tools for data requirements, design, development, and deployment
  • Security and vulnerability checking tools
  • Database comparison tools (may come with your data modeling tool)
  • Data comparison tools for monitoring changes to reference data as well as pre-test and post-test data states
  • Data movement and auditing tools for tracking what people are doing with data
  • Log protection tools to help ensure no one is messing with audit logs
  • Permissions auditing, including changes to existing permissions
  • Anonymous reporting tools for projects and individuals not following data protection policies

 

These tools could be locally hosted or provided as services you run against your environments.  There are many more tools and services a shop should have deployed; I've just covered the ones that I expect to see everywhere.

 

People

The people on your team should be trained in best practices for data security and privacy.  This should be regular training, since compliance and legal issues change rapidly. People should be tested on these as well, and I don't mean just during training.

 

When I worked in high-security locations, we were often tested for compliance with physical security while we went about our regular jobs. They'd send people not wearing badges to our offices asking project questions.  They would leave pages marked SECRET on the copier and printers.  They would fiddle with our desktops to see if we noticed extra equipment.  I recommend to management that they do this with virtual things like data as well.

 

As I covered in my first post, I believe people should be measured and rewarded based on their data protection actions. If there are no incentives for doing the hard stuff, it will always get pushed to "later" on task lists.

 

Roles

I'm a bit biased, but I recommend that every project have a data architect, or at least a portion of one. This is a person who is responsible for managing and reviewing data models, is an expert in the data domains being used, is rewarded for ensuring data protection requirements are validated and implemented, and is given a strong governance role for dealing with non-compliance issues.

 

Teams should also have a development DBA to choose the right data protection features, ensuring data security and privacy requirements are implemented in the best way given the costs, benefits, and risks associated with each option.

 

Developers should have a designated data protection contact. This could be the project lead or any developer with a security-driven mindset. This person would work with the data architect and DBA to ensure data protection is given the proper level of attention throughout the process.

 

Quality assurance teams should also have a data protection point of contact to ensure test plans adequately test security and privacy requirements.

 

All of these roles would work with enterprise security and compliance.  While every team member is responsible for data protection, designating specific individuals with these roles ensures that proper attention is given to data.

 

Finally…

Given the number of data breaches reported these days, it's clear to me that our industry has not been giving proper attention to data protection. In this five-post series, I couldn't possibly cover all the things that need to be considered, let alone accomplished. I hope it has helped you think about what your teams are doing now and how they can be better prepared to love their data better than they have in the past.

 

And speaking of preparation, I'm going to leave a plug here for my upcoming THWACKcamp session on the Seven Samurai of SQL Server Data Protection. In this session, Thomas LaRock and I go over seven features in Windows and SQL Server that you should be using. Don't worry if you aren't lucky enough to use SQL Server; there's plenty of data protection goodness for everyone. Plus a bit of snark, as usual.

 

I also wrote an eBook for SolarWinds called Ten Ways We Can Steal Your Data with more tips about loving your data.

 

See you at THWACKcamp!

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

With 2018 two-thirds over, federal agencies should be well into checking off the various cloud migration activities outlined in the American Technology Council’s Federal IT Modernization Report. Low-risk cloud migration projects were given clearance to commence as of April 1, and security measures and risk assessments will take place throughout the rest of the year. 

 

Agencies must remain aggressive with their cloud migration efforts yet continue to enforce and report on security measures while undergoing a significant transition. Adopting a pair of policies that take traditional monitoring a step further can help them continue operating efficiently.

 

Deep Cloud Monitoring

 

As our recent SolarWinds IT Trends survey indicates, hybrid IT and multicloud environments are becoming increasingly prevalent. Agencies are keeping some infrastructure and applications onsite while turning to different cloud providers for other types of workloads. This trend will likely continue as agencies modernize their IT systems and become more dependent on federally specific implementations of commercial cloud technologies, as called for in the ATC report.

 

A multicloud and hybrid IT approach can create challenges. For example, “blind spots” can creep in as data passes back and forth between environments, making it difficult for federal IT professionals to keep track of data in these hybrid environments. In addition, trying to manage all the data while ensuring adequate controls are in place as it moves between cloud providers and agencies can be an enormously complex and challenging operation. It can be difficult to detect anomalies or flag potential problems.

 

To address these challenges, administrators should consider investing in platforms and strategies that provide deep network monitoring across both on-premise and cloud environments. They should have the same level of awareness and visibility into data that resides on AWS or Microsoft servers as they would on their own in-house network.

 

Deep Email Monitoring

 

In addition to focusing on overall network modernization, the ATC report specifically calls out the need for shared services. In particular, the report cites moving toward cloud-based email and collaboration tools as agencies attempt to replace duplicative legacy IT systems.

 

The Air Force is leading the charge here with its transition to Microsoft Office 365, but there are inherent dangers in even a seemingly simple migration to cloud email. Witness the damage done by recent Gmail, Yahoo!, and Office 365 email outages, which caused hours of lost productivity and potentially cost organizations hundreds of millions of dollars. Lost email can also result in missed communications, which can be especially worrisome if those messages contain mission-critical and time-sensitive information.

 

Agencies should consider implementing procedures that allow their teams to monitor email paths, system state, and availability just as closely as they would any other applications operating in hybrid IT environments. Emails take different paths as they move between source and destination. Managers should closely monitor those to help ensure that the information moves between hosted providers and on-premise networks without fail. This practice can help IT professionals better understand and monitor email service quality and performance to help ensure continuous uptime.

 

The fact that there is now a clear and direct map to modern, agile, and efficient infrastructures does not necessarily make the journey any easier. Thorough strategies aimed at cloud and application or service (like email) monitoring can help agencies navigate potential hazards and help ensure seamless and safe modernization of federal information systems.

 

Find the full article on GCN.

 


As anyone who has run a network of any size has surely experienced, behind one alert there is typically (but not always) a deeper issue that may or may not generate further alarms. An often overlooked correlation is the one between a security event caught by a Network Security Monitor (NSM) and an alarm from a network or service monitoring system. In certain cases, a security event will be noticed first as a network or service alarm. Due to the nature of many modern attacks, whether volumetric or targeted, the goal is typically to take a given resource offline. This “offlining,” usually labeled a denial of service (DoS), has a goal that is very easy to understand: make the service go away.

 

To understand how and why correlating these events is important, we need to understand what they are. Here, the focus is on two different attack types and how they manifest. There are significant similarities between them, and knowing the difference can save time and effort during triage, whether you're dealing with large or small outages.

 

Keeping that in mind, the obvious goals are:

 

1. Rooting out the cause

2. Mitigating the outage

3. Understanding the event to prevent future problems

 

For the purposes of this post, the two most common issues will be described: volumetric and targeted attacks.

 

Volumetric

This one is the most common and it gets the most press due to the sheer scope of things it can damage. At a high level, this is just a traffic flood. It’s typically nothing fancy, just a lot of traffic generated in one way or another (there are a myriad of different mechanisms for creating unwanted traffic flows), typically coming from either compromised hosts or misconfigured services like SNMP, DNS recursion, or other common protocols. The destination, or target, of the traffic is a host or set of hosts that the offender wants to knock offline.

 

Targeted

This one is far more stealthy. It’s a scalpel, where a volumetric flood is a machete. A targeted attack is typically used to gain access to a specific service or host. It is less like a flood and more like a specific exploit pointed at a service. This type of attack usually has a different goal: gaining access for recon and information collection. Sometimes a volumetric attack is simply a smoke screen for a targeted one, which can be very difficult to root out.

 

Given what we know about these two kinds of attacks, how can we use all of our data to triage the damage better (and more quickly)? Easily, actually. That a volumetric attack is occurring is fairly easy to determine: traffic spikes, flatlined circuits, service degradation. In a strictly network engineering world, the problem would manifest and, best case, NetFlow data would be consulted in short order. Even with that dataset, it may not be obvious to a network engineer that an attack is occurring; it may just appear as a large amount of UDP traffic from different sources. Given traffic levels, a single 10G-connected host can drown out a 1G-connected site. This type of attack can also manifest in odd ways, especially if there are link speed differences along a given path, looking like a link failure or a latency issue when in reality it is a volumetric attack. However, with a tuned and maintained NSM of some kind in place, the traffic should be readily identified as a flood, and the pattern can be filtered more quickly, either by the site or by the upstream ISP.
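
As a minimal sketch of that kind of first-pass triage, assuming flow records have already been exported and parsed into simple dictionaries, here's how you might spot a single source drowning out a link.

    from collections import Counter

    # Invented flow export; in practice these come from your NetFlow collector.
    flow_records = [
        {"src": "203.0.113.9",  "proto": "udp", "bytes": 9_500_000},
        {"src": "198.51.100.7", "proto": "udp", "bytes": 12_000},
        {"src": "192.0.2.44",   "proto": "udp", "bytes": 8_000},
    ]

    per_source = Counter()
    for rec in flow_records:
        if rec["proto"] == "udp":
            per_source[rec["src"]] += rec["bytes"]

    total = sum(per_source.values())
    for src, nbytes in per_source.most_common():
        if nbytes / total > 0.8:  # one source carrying most of the UDP bytes
            print(f"possible volumetric flood from {src}: {nbytes/total:.0%} of UDP traffic")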

 

Targeted attacks will look very different, especially when performed on their own. This is where an NSM is critical. With attempts to compromise actual infrastructure hardware like routers and switches on a significant uptick, knowing the typical traffic patterns for your network is key. If a piece of your critical infrastructure is targeted, and it is inside of your security perimeter, your NSM should catch that and alert you to it. This is especially important in the case of your security equipment. Having your network tapped in front of the filtering device can greatly aid in seeing traffic destined for your actual perimeter. Given that there are documented cases of firewalls being compromised, this is a real threat. If and when it occurs, it may appear as high load on a device, a memory allocation increase, or perhaps a set of traffic spikes, none of which a network engineer will be much concerned with as long as service is unaffected. Understanding the traffic patterns that led to those symptoms, however, could help uncover a far less pleasant cause.

 

Most of these occurrences are somewhat rare; nevertheless, it is a very good habit to check all data sources when something outside the baseline occurs on a given network. Perhaps more importantly, there is no substitute for good collaboration. A strong, positive, ongoing working relationship between security professionals and network engineers is a key element in making any of these occurrences less painful. In many small and medium-sized environments, these people are one and the same. But when they aren’t, collaboration at a professional level is as important and useful as the cross-referencing of data sources.
