
Geek Speak


Home after a couple weeks on the road between Cisco Live! and Data Grillen. My next event is Microsoft Inspire, and if you're attending, please stop by the booth so we can talk data.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Meds prescriptions for 78,000 patients left in a database with no password

This is the second recent breach involving a MongoDB database, and it underscores the need for consequences for those who continue to practice poor security. Until we see stiffer penalties for the individuals involved, you can expect those rockstar MongoDB dev teams to get new jobs and repeat all the same mistakes.

 

Florida City Pays $600,000 Ransom to Save Computer Records

Never, ever pay the ransom. There's no guarantee you get your files, and you become a target for others (because now they know you will pay). Also? Time to evaluate your security response plan regarding ransomware, especially if you're running older software. It's just a matter of time before Anton in Accounting clicks on that phishing link.

 

AMCA Files for Bankruptcy Following Data Breach

Nice reminder for everyone that a breach can put your company out of business. Life comes at you fast.

 

Machine Learning Doesn’t Introduce Unfairness—It Reveals It

Great post. Machine learning algorithms aren't fair, because the data they use has inherent bias, and the machines are good at uncovering that bias. In some ways, we humans built these machines, and the result is that we're looking at ourselves in the mirror.

 

Microsoft bans Slack and discourages AWS and Google Docs use internally

Because the free version of Slack doesn't meet Microsoft security standards. Maybe that should have been the headline instead of the clickbait trying to portray Microsoft as evil.

 

Cyberattack on Border Patrol subcontractor worse than previously reported

Your security is only as strong as your weakest vendor partner. Your security protocols could be the best in the world, but it won't matter if you give a partner access and they cause the breach.

 

Nashville is banning electric scooters after a man was killed

This is absurd. The scooters didn't do anything wrong. They should not be penalized for the actions of a drunk person making bad choices. I look forward to the mayor banning cars the next time a drunk driver kills someone in downtown Nashville.

 

Words cannot describe the glory that is Data Grillen:

 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Jim Hansen. Jim offers three suggestions to help with security, including using automation and baselines. I agree with Jim that one of the keys is to try to keep security in the background, and I’d add it’s also important to be vigilant and stick to it.

 

Government networks will continue to be an appealing target for both foreign and domestic adversaries. Network administrators must find ways to keep the wolves at bay while still providing uninterrupted and seamless access to those who need it. Here are three things they can do to help maintain this delicate balance.

 

1. Gain Visibility and Establish a Baseline

 

Agency network admins must assess how many devices are connected to their networks and who's using those devices. This information can help establish visibility into the scope of activity taking place, allow teams to expose shadow IT resources, and root out unauthorized devices and users. Administrators may also wish to consider whether some of those devices should be allowed to continue operating.

 

Then, teams can gain a baseline understanding of what’s considered normal and monitor from there. They can set up alerts to notify them of unauthorized devices or suspicious network activity outside the realm of normal behavior.

 

All this monitoring can be done in the background, without interrupting user workflows. The only time users might get notified is if their device or activity raises a red flag.
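To make the idea concrete, here's a minimal sketch (in Python) of what baseline-and-alert logic can look like. It assumes you can already export per-device activity counts from your monitoring platform; the counts and the three-sigma threshold below are invented for illustration, not taken from any particular product.

    from statistics import mean, stdev

    def build_baseline(history):
        """history: list of hourly activity counts observed during normal operation."""
        return mean(history), stdev(history)

    def is_anomalous(current, baseline, sigmas=3):
        """Flag activity more than `sigmas` standard deviations above the baseline."""
        avg, sd = baseline
        return current > avg + sigmas * sd

    history = [120, 135, 128, 140, 118, 125, 131]   # last week of hourly counts
    baseline = build_baseline(history)

    if is_anomalous(450, baseline):
        print("ALERT: device activity well outside normal range")

In practice the baseline window and tolerance are per-metric judgment calls, but the shape of the logic stays this simple.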

 

2. Automate Security Processes

 

Many network vulnerabilities are caused by human error or malicious insiders. Government networks comprise many different users, devices, and locations, and it can be difficult for administrators to detect when something as simple as a network configuration error occurs, particularly if they’re relying on manual network monitoring processes.

 

Administrators should create policies outlining approval levels and change management processes so network configuration changes aren’t made without approval and supporting documentation.

 

They can also employ an automated system running in the background to support these policies and track unauthorized or erroneous configuration changes. The system can scan for unauthorized or inconsistent configuration changes falling outside the norm. It can also look for non-compliant devices, failed backups, and even policy violations.
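As a rough illustration of the drift-detection piece (not any particular product's implementation), the core check is simply comparing a device's running configuration against the approved baseline. The file paths below are hypothetical:

    from pathlib import Path

    def config_drift(approved_path, running_path):
        """Return lines added to or missing from the running config vs. the approved one."""
        approved = set(Path(approved_path).read_text().splitlines())
        running = set(Path(running_path).read_text().splitlines())
        return {
            "unauthorized_additions": sorted(running - approved),
            "missing_from_running": sorted(approved - running),
        }

    drift = config_drift("approved/router1.cfg", "backups/router1_running.cfg")
    if drift["unauthorized_additions"] or drift["missing_from_running"]:
        print("Config change detected outside the approved baseline:", drift)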

 

When a problem arises, the system can automatically correct the issue while the IT administrator surgically targets the problem. There’s no need to perform a large-scale network shutdown.

 

Automated and continuous monitoring for government IT can go well beyond configuration management, of course. Agencies can use automated systems to monitor user logs and events for compliance with agency security policies. They can also track user devices and automatically enforce device policies to help ensure no rogue devices are using the network.

 

Forensic data captured by the automated system can help trace the incident back to the source and directly address the problem. Through artificial intelligence and machine learning, the system can then use the data to learn about what happened and apply that knowledge to better mitigate future incidents.

 

3. Lock Down Security Without Compromising Productivity

 

The systems and strategies outlined above can maintain network security for government agencies without interfering with workers’ productivity. Only when and if something comes up is a user affected, and even then, the response will likely be as unobtrusive as simply denying network access to that person.

 

In the past, that kind of environment has come with a cost. IT professionals have had to choose between providing users with unfettered access to the tools and information they need to work and tightening security to the point of restriction.

 

Fortunately, that approach is no longer necessary. Today, federal IT administrators can put security at the forefront by making it work for them in the background. They can let the workers work—and keep the hackers at bay.

 

Find the full article on GCN.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Are you excited for this post? I certainly know I am!

 

If this is the first article of mine you're seeing and you haven't read my prior article, "Why Businesses Don't Want Machine Learning or Artificial Intelligence," I advise you to go back and read it as a primer for some of the things I'm about to share here. You might get "edumicated," or you might laugh. Either way, welcome.

 

Well, officially, hello to returning readers. And welcome to those who found their way here thanks to the power of SEO and the media jumping all over any of the applicable buzzwords in here.

 

The future is now. We’re literally living in the future.

 

Tesla Self-Driving, look ma no hands!

Image: Tesla/YouTube

 

That's the word from the press rags if you've seen the video of Tesla running on full autopilot and doing a complicated series of commute/drives cited in the article, "New Video Shows Tesla's Self-Driving Technology at Work." And you would be right. Indeed, the future IS now. Well, kind of. I mean it's already a few seconds from when I wrote the last few words, so I'm clearly traveling through time...

 

But technology and novelty like driving in traffic are a little more nuanced than this.

 

"But I want this technology right now. Look, it works. Stop your naysaying. You hate Tesla, blah blah blah, and so on."

 

That feels very much like Apple/Android fanboy or fan-hate when someone says anything negative about a thing they want/like/love. Nobody wants this more than me. (Well, except for the robots ready to kill us.)

 

Are There Really Self-Driving Teslas?

 

You might be surprised to know that Tesla has advanced significantly in the past few years. I know, imagine that—so much evolution! But just as we reward our robot overlords for stopping at a stop sign, they're only as good as the information we feed them.

 

We can put the mistakes of 2016, with tragedies like this one, behind us: "Tesla self-driving car fails to detect truck in a fatal crash."

 

Fortunately, Tesla continues to improve, and we'll be ready for fully autonomous, self-driving vehicles roaming the roads without flaw or problem by 2019. (Swirly lines, and flashback to just a few months into 2019: Tesla didn't fix an Autopilot problem for three years, and now another person is dead.)

 

Is the Tesla Autopilot Safe?

 

Around this time, as I was continuing my study, research, and analysis of this and problems like it, I came across the findings of @greentheonly.

 

Trucks are misunderstood and prefer to be identified as Overpasses

Image: https://twitter.com/greentheonly/status/1105996445251977217

 

And rightly so, we can call this an anomaly. It doesn't happen that frequently, and it's not a big deal, except when it does happen. Not just when, but the fact that it happens at all: whether it's seeing the undercarriage of a truck, interpreting it as an overpass you can "safely pass" under, and shearing the top off the cab, or seeing a truck beside you and interpreting the space beneath it as a safe "lane" to change into.

 

But hey, that's Autopilot. We're not going to use that anymore until it's solid, refined, and safe. Then the AI and ML can't kill me. I'll be making all the decisions.

 

 

ELDA Taking Control! ELDA may be the death of someone!

Image: https://twitter.com/KevinTesla3/status/1134554915941101569

 

If you recall in the last article, I mentioned the correlation of robots, Jamba Juice, and plasma pumps. Do you ever wonder why Boston Dynamics doesn't have robot police officers like the ED-209 working on major metro streets, providing additional support akin to RoboCop? (I mean, other than the fact that they're barely allowed to use machine learning and computer vision. But I digress.)

 

It’s because they're not ready yet. They don't have things fully baked. They need a better handle on the number of “faults” that can occur, not to mention liability.

 

Is Autonomous Driving Safe?

 

Does this mean, though, that we should stop where we are and no one should use any kind of autonomous driving function? (Well, partially...) The truth is, there are, on average, 1.35 million road traffic deaths each year. Yes, that figure is worldwide, but that figure is also insanely staggering. If we had autonomous vehicles, we could greatly diminish the number of accidents we experience on the roads, which could bring those death tolls down significantly.

 

And someday, we will get there. The vehicles’ intelligence is getting better every day. They make mistakes, sometimes not so bad—"Oh, I didn't detect a goose on the road as an object/living thing and ended up running it over." Or, "The road was damaged in an area, so we didn't detect that was a changing lane/crossing lane/fill-in-the-blank of something else."

 

The greatest strength of autonomous vehicles like Tesla, Waymo, and others is their telemetry. But their greatest weakness is relying on only a few points of that telemetry.

 

Can Self-Driving Cars Ever Be Safe?

 

In present-day 2019, we rely on vehicles with eight cameras (hey, that's more eyes than we humans have!), some LIDAR data, and a wealth of knowledge of what we should do in conditions on roadways. Some complaints I've shared with engineers at several of these firms are the limitations of these characteristics, mainly that the cameras are fixed, unlike our eyes.

 

Landslide!

Video: https://youtu.be/SwfKeFA9iEo?t=22

 

So, if we encounter a rockslide, a landslide, or something falling from above (a tree, a plane, a meteorite, a car-sized piece of mountain, etc.), we'll be at the mercy of the vehicle and its ability to identify and detect it. This won't be that big of a deal, though. We'll see a few deaths here or there, it'll make the press, and they'll quietly cover it up or claim to fix it in the next bugfix released over the air to the vehicles (unlike the aforementioned problem that went unsolved for three years).

 

The second-biggest problem we face is that, just like us, these vehicles are (for the most part) working in a vacuum. A good and proper future of self-driving autonomy will involve the vehicles communicating with each other, street lights, traffic cameras, environmental data, and telemetry from towers, roads, and other sensors. Rerouting traffic around issues will become commonplace. When an ambulance is rushing someone to a hospital, the system can clear the roadways in favor of emergency vehicles. Imagine if buses ran on the roads efficiently. The same could be true of all vehicles.

 

That's not a 2020 vision. It's maybe a 2035 or 2050 vision in some cities. But it's a future we can clearly see ahead of us.

 

The Future of Tesla and Self-Driving Vehicles

 

It may seem like I'm critical of Tesla and their Autopilot programs. That's because I am. I see them jumping before they crawl. I've seen deaths rack up, and I've seen many VERY close calls. It's all in the effort of better learning and training, but it's on the backs of consumers and on the graves of the end users who've been made to believe these vehicles are tomorrow's self-drivers. In reality, they're in an alpha state, given the limited amount of telemetry available.

 

Will I use Autopilot? Yeah, probably... and definitely in an effort to discover and troubleshoot problems, because I'm the kind of geek who likes to understand things. I don't have a Tesla yet, but that's only a matter of time.

 

So, I obviously cannot tell you what to do with your space-age vehicle driving you fast forward into the future, but I will advise you to be cautious. I've had to turn ELDA off on my Chevy Bolt because it kept steering me into traffic, and that system has little to nothing I would consider "intelligent" about it in the grand scheme of things.

 

At the start of this article, I asked if you were as excited as I was. I'm not going to ask if you're as terrified as I am! I will ask you to be cautious. Cautious as a driver, cautious as a road warrior. The future is close, so let's make sure you live to see it with the rest of us. Because I look forward to a day when the number 10 cause of death worldwide is a thing of the past.

 

Thank you and be safe out there!

We put in all this work to create a fully customized toolchain tailored to your situation. Great! I bet you're happy you can change small bits and pieces of your toolchain when circumstances change, and the Periodic Table of DevOps tools has become a staple when looking for alternatives. You're probably still fighting the big vendors that want to offer you a one-size-fits-all solution.

 

But there's a high chance you're drowning in tool number 9,001. The tool sprawl, like VM sprawl after we started using virtualization 10-15 years ago, is real. In this post, we'll look at managing the tools managing your infrastructure.

 

A toolchain is supposed to make it easier for you to manage changes, automate workflows, and monitor your infrastructure, but isn't it just shifting the problem from one system to another? How do we prevent spending disproportionate amounts of time managing the toolchain, keeping us from more important value-add work?

 

The key here isn't just to look at your infrastructure and workflows through a pair of Lean glasses, but also to look at the toolchain with the same perspective. Properly integrating tools and selecting the right foundation makes all the difference.

 

Integration

One key aspect of managing tool sprawl is properly integrating all the tools. With each handover between tools, there's a chance of manual work, errors, and friction. But what do we mean by properly integrating? Simply put: no manual work between steps, and as little custom code or scripting as possible.

 

In other words, tools that integrate natively with each other take less custom work. The integrations come ready out of the box and are an integral, vendor-supported part of the product. This means you don't have to do (much) work to integrate them with other tools. Selecting tools that natively integrate is a great way to reduce the negative side effects of tool sprawl.

 

Fit for Purpose to Prevent DIY Solutions

Some automation solutions are very flexible and customizable, like Microsoft PowerShell. It's widely adopted, very flexible, highly customizable, and one of the most powerful tools in an admin's tool chest, but getting started with it can be daunting. This flexibility leads to some complexity, and often there's no single way of accomplishing things. This means you need to put in the work to make PowerShell do what you want, instead of a vendor providing workflow automation for you. Sometimes, it's worthwhile to use a fit-for-purpose automation tool (like Jenkins, SonarQube, or Terraform) that has automated the workflows for you instead of a great task automation scripting language. Developing and maintaining your own scripts for task automation and configuration management can easily take up a large chunk of your workweek, and some of that work has little to do with the automation effort you set out to accomplish in the first place. Outsourcing that responsibility to the company (or open-source project) that created the high-level automation logic (Terraform by HashiCorp is a great example of this) makes sense, saving your time for job-specific work adding actual value to your organization.

 

If you set out to use a generic scripting language, choose one that fits your technical stack (like PowerShell if you're running Microsoft stack) and technical pillar (such as Terraform for infrastructure-as-code automation).

Crisis Averted. Here's Another.

So, your sprawl crisis has been averted. Awesome; now we have the right balance of tools doing what they're supposed to do. But there's a new potential hazard around the corner: lock-in.

 

In the next and final post in this series, we'll have a look at the different forms of lock-in: vendor lock-in, competition lock-in, and commercial viability. 

 

Home from San Diego and Cisco Live! for a few days, just enough time to spend Father's Day with friends and family. Now I'm back on the road, heading to Lingen, Germany, for Data Grillen. That's right, an event dedicated to data, bratwurst, and beer. I may never come home.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

This Austrian company is betting you’ll take a flying drone taxi. Really.

I *might* be willing to take one, but I’d like to know more about how they plan to control air traffic flow over city streets.

 

How Self-Driving Cars Could Disrupt the Airline Industry

Looks like everyone is going after airline passengers. If airlines don’t up their level of customer service, autonomous cars and flying taxis could put a dent in airline revenue.

 

Salesforce to Buy Tableau for $15.3 Billion in Analytics Push

First Google bought Looker, then Salesforce bought Tableau. That leaves AWS looking for a partner as everyone tries to keep pace with Microsoft and Power BI.

 

Genius Claims Google Stole Lyrics Embedded With Secret Morse Code

After decades of users stealing content from websites found with a Google search, Google decides to start stealing content for themselves.

 

Your Smartphone Is Spying (If You’re a Spanish Soccer Fan)

Something tells me this is likely not the only app that is spying on users. And while La Liga was doing so to catch pirated broadcasts, this is a situation where two wrongs don’t make a right.

 

Domino’s will start robot pizza deliveries in Houston this year

How much do you tip a robot?

 

An Amazing Job Interview

I love this approach, and I want you to consider using it as well, no matter which side of the interview you're sitting on.

 

Sorry to all the other dads out there, but it's game over, and I've won:

 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp about choosing a database tool to help improve application performance and support advanced troubleshooting. Given the importance of databases to application performance, I’m surprised this topic doesn’t get more attention.

 

The database is at the heart of every application; when applications are having performance issues, there’s a good chance the database is somehow involved. The greater the depth of insight a federal IT pro has into its performance, the more opportunity to enhance application performance.

 

There's a dramatic difference between simply collecting data and correlating and analyzing that data to get actionable results. For example, federal IT teams often collect highly granular information that's difficult to analyze; others may collect a vast amount of information, making correlation and analysis too time-consuming.

 

The key is to unlock a depth of information in the right context, so you can enhance database performance and, in turn, optimize application performance.

 

The following are examples of “non-negotiables” when collecting and analyzing database performance information.

 

Insight Across the Entire Environment

 

One of the most important factors in ensuring you’re collecting all necessary data is to choose a toolset providing visibility across all environments, from on-premises, to virtualized, to the cloud, and any combination thereof. No federal IT pro can completely understand or optimize database performance with only a subset of information.

 

Tuning and Indexing Data

 

Enhancing database performance often requires significant manual effort. That’s why it’s critical to find a tool that can tell you where to focus the federal IT team’s tuning and indexing efforts to optimize performance and simultaneously reduce manual processes.

 

Let’s take database query speed as an example. Slow SQL queries can easily result in slow application performance. A quality database performance analyzer presents federal IT pros with a single-pane-of-glass view of detailed query profile data across databases. This guides the team toward critical tuning data while reducing the time spent correlating information across systems. What about indexing? A good tool will identify and validate index opportunities by monitoring workload usage patterns, and then make recommendations regarding where a new index can help optimize inefficiencies.
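For readers who want to see what that query profile data looks like under the hood, here's a small, hand-rolled Python sketch against SQL Server's query-stats DMVs; it shows the raw material a purpose-built analyzer collects and correlates for you automatically. It assumes the pyodbc package, a login with VIEW SERVER STATE permission, and a placeholder connection string:

    import pyodbc

    # Top queries by average elapsed time, pulled straight from the plan cache
    QUERY = """
    SELECT TOP 10
        qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
        qs.execution_count,
        SUBSTRING(st.text, 1, 200) AS query_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY avg_elapsed_microseconds DESC;
    """

    conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
                          "DATABASE=master;Trusted_Connection=yes")
    for row in conn.cursor().execute(QUERY):
        print(row.avg_elapsed_microseconds, row.execution_count, row.query_text)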

 

Historical Baselining

 

Federal IT pros must be able to compare performance levels—for example, expected performance with abnormal performance—by establishing historical baselines of database performance: looking at performance at the same time on the same day last week, the week before, and so on.

 

This is the key to database anomaly detection. With this information, it’s easier to identify a slight variation—the smallest anomaly—before it becomes a larger problem.
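A minimal sketch of that comparison, with invented numbers, looks like this: take the readings from the same time slot in previous weeks, average them, and flag the current reading if it deviates beyond a tolerance you choose per metric.

    from statistics import mean

    # Average query response time (ms) at this hour on Mondays, past four weeks
    same_slot_history = [42.0, 39.5, 44.1, 40.8]
    current = 61.3

    baseline = mean(same_slot_history)
    deviation_pct = (current - baseline) / baseline * 100

    if deviation_pct > 25:   # the tolerance is a per-metric judgment call
        print(f"Anomaly: {deviation_pct:.0f}% above the historical baseline")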

 

Remember, every department, group, or function within your agency relies on a database in some way or another, particularly to drive application performance. Having a complete database performance tool enables agencies to stop finger-pointing and pivot from being reactive to being proactive; from having high-level information to having deeper, more efficient insight.

 

This level of insight can help optimize database performance and make users happier across the board.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 


This is a continuation of one of my previous blog posts, Which Infrastructure Monitoring Tools Are Right for You? On-Premises vs. SaaS, where I talked about the considerations and benefits of running your monitoring platform in either location. I touched on the two typical licensing models geared either towards a CapEx or an OpEx purchase. CapEx is something you buy and own, or a perpetual license. OpEx is usually a monthly spend that can vary month-to-month, like a rental license. Think of it like buying Microsoft Office off-the-shelf (who does that anymore?) where you own the product, or you buy Office 365 and pay month-by-month for the services that you consume.

 

Let’s Break This Down

As mentioned, there are two purchasing options. But within those, there are different ways monitoring products are licensed, which could ultimately sway your decision on which product to choose. If your purchasing decisions are ruled by the powers that be, then cost is likely one of the top considerations when choosing a monitoring product.

 

So what license models are there?

 

Per Device Licensing

This is one of the most common licensing models for a monitoring solution. What you need to look at closely, though, is the definition of a device. I have seen tools classify a network port on a switch as an individual device, and I have seen other tools classify the entire switch as a device. You can win big with this model if something like a VMware vCenter server or an IPMI management tool is classed as one device: you can effectively monitor all the devices managed by that management tool for the cost of a single device license.

 

Amount of Data Ingested

This model seems to apply more to SaaS-based security information and event management (SIEM) solutions, but may apply to other products. Essentially, you’re billed per GB of data ingested into the solution. The headline cost for 1GB of data may look great, but once you start pumping hundreds of GBs of data into the solution daily, the costs add up very quickly. If you can, evaluate log churn in terms of data generated before looking at a solution licensed like this.
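A quick back-of-the-envelope calculation shows how fast it adds up. The per-GB rate below is purely illustrative; plug in your vendor's quote and your real daily log volume:

    # Hypothetical ingest-based pricing math
    price_per_gb = 2.50          # $/GB ingested (illustrative)
    daily_ingest_gb = 300        # firewalls + servers + app logs

    monthly_cost = price_per_gb * daily_ingest_gb * 30
    print(f"~${monthly_cost:,.0f} per month")   # ~$22,500 at these numbers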

 

Pay Your Money, Monitor Whatever You Like

Predominantly used for on-premises monitoring solution deployments, you pay a flat fee and you can monitor as much or as little as you like. The initial buy-in price may be high, but if you compare it to the other licensing models, it may work out cheaper in the long run. I typically find established players in the monitoring solution game have on-premises and SaaS versions of their products. There will be a point where paying the upfront cost for the on-premises perpetual license is cheaper than paying for the monthly rental SaaS equivalent. It's all about doing your homework on that one.
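Doing that homework can be as simple as a break-even comparison. All of the figures below are hypothetical placeholders, not real quotes:

    # Rough break-even: perpetual license vs. SaaS subscription
    perpetual_upfront = 30000        # one-off license cost
    perpetual_maintenance = 6000     # yearly support/maintenance
    saas_monthly = 1500              # subscription per month

    for year in range(1, 6):
        capex_total = perpetual_upfront + perpetual_maintenance * year
        opex_total = saas_monthly * 12 * year
        print(f"Year {year}: perpetual ${capex_total:,} vs. SaaS ${opex_total:,}")
    # At these numbers, the perpetual license becomes the cheaper option in year 3.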

 

Conclusion

I may be preaching to the choir here, but I hope there are some useful snippets of info you can take away and apply to your next monitoring solution evaluation. As with any project, planning is key. The more info you have available to help with that planning, the more successful you will be.

Information Security artwork, modified from Pete Linforth at Pixabay.

In post #3 of this information security series, let's cover one of the essential components in an organization's defense strategy: their approach to patching systems.

Everywhere an Attack

When was the last time you didn't see a ransomware attack or security breach story in the news? And when was weak patching not cited as one of the reasons that made the attack possible? If only the organization had applied those security patches from a couple of years ago…

 

Baltimore City is one of the most recent ransomware attacks in the news. For the past month, RobbinHood ransomware, reportedly spread using the NSA-developed EternalBlue exploit, has crippled many of the city's online processes like email, paying water bills, and paying parking tickets. More than four weeks later, city IT workers are still working diligently to restore operations and services.

 

How could Baltimore have protected itself? Could it have applied those 2017 WannaCry exploit patches from Microsoft? Sure, but it's never just one series of patches that weren’t applied.

 

Important Questions

How often do you patch? How do you find out about vulnerabilities? What processes do you have in place for testing patches?

 

How and when an organization patches systems tells you a lot about how much they value security and their commitment to designing systems and processes for those routine maintenance tasks critical to an organization's overall security posture. Are they proactive or reactive? Patching (or lack thereof) can shine a light on systematic problems.

 

If an organization has taken the time to implement systems and processes for security patch management and applying security updates, the higher-visibility areas of their information security posture (like identity management, auditing, and change management) are likely also in order. If two-year-old patches weren't applied to public-facing web servers, you can only guess what other information security best practices have been neglected.

 

If you've ever worked in Ops, you know that a reprieve from patching systems and devices will probably never come. Applying patches and security updates is a lot like doing laundry. It's hard work and no matter how much you try, you’ll never catch up or finish. When you let it slide for a while, the catch-up process is more time-consuming and arduous than if you had stayed vigilant in the first place. However, unlike the risk of having to wear dirty clothes, the threat of not patching your systems is a serious security problem with real-world consequences.

 

Accountability-Driven Compliance

We all do better jobs when we're accountable to someone, whether it be an outside organization or even ourselves. If your mother-in-law continuously checked in to make sure you never fell behind on your laundry, you would be far more dutiful about keeping up with washing your clothes. If laundry is a constant challenge for you (like me), you would probably design a system to make sure you could keep up with laundry duties.

 

In the federal space, continuous monitoring is a tenet of government security initiatives like the Federal Risk and Authorization Management Program (FedRAMP) and the Risk Management Framework (RMF). These initiatives centralize accountability and consequences while driving continuous patching in organizations. FedRAMP and RMF assess an organization's risk. Because unpatched systems are inherently risky, failure to comply can result in revoking an organization's Authority to Operate (ATO) or shutting down their network.

 

How can you tell if systems are vulnerable? Most likely, you run vulnerability scans that leverage Common Vulnerabilities and Exposures (CVE) entries. This CVE list, an "international cybersecurity community effort," drives most security vulnerability assessment tools in on-premises and off-premises products like Nessus and Amazon Inspector. In addition, software vendors leverage the CVE list to develop security patches for their products. Vulnerability assessment and continuous scanning end up driving an organization's continuous patching stance.
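As a small illustration of how scan output can drive that stance, here's a sketch that turns a scanner's CSV export into a patching priority list. The column names and the CVSS 7.0 cutoff are assumptions for illustration, not any specific product's format:

    import csv

    def high_risk_findings(report_path, min_cvss=7.0):
        """Read a scan export and return findings at or above the CVSS cutoff, worst first."""
        with open(report_path, newline="") as f:
            rows = [r for r in csv.DictReader(f) if float(r["cvss_score"]) >= min_cvss]
        return sorted(rows, key=lambda r: float(r["cvss_score"]), reverse=True)

    for finding in high_risk_findings("vuln_scan_export.csv"):
        print(finding["host"], finding["cve_id"], finding["cvss_score"])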

 

Vendor Questions

FedRAMP also assesses security and accredits secure cloud offerings and services. This initiative allows federal organizations to tap into the "as a Service" world and let outside organizations and third-party vendors assume the security, operations, and maintenance of applications, of which patching and applying updates are an important component.

When selecting vendors or "as a service" providers, you can assess part of their commitment to security by inquiring about software component versions. How often do they issue security updates? Can you apply minor software updates out-of-cycle without affecting support?

 

If a software vendor's latest release ships a two-year-old version of Tomcat, or a version close to end-of-support with no planned upgrades, it may be wise to question their commitment to creating secure software.

 

Conclusion

The odds are that some entity will assess your organization's risk, whether it's a group of hackers looking for an easy target, your organization's security officer, or an insurance company deciding whether to issue a cyber-liability insurance policy. One thing is sure to interest all of them: how many unpatched systems and vulnerabilities are lying around on your network, and the story your organization's patching tells.

Did you come here hoping to read a summary of the past 5+ years of my research on self-driving, autonomous vehicles, Tesla, and TNC businesses?

Well, you’re in luck… that’s my next post.

 

This post helps make that post far more comprehensible. So here we lay the foundation, with fun, humor, and excitement. (Mild disclaimer for those who may have heard me speak on this topic before or listened to this story shared in talks and presentations. I hope you have fun all the same.)

 

I trust you’re all relatively smart and capable humans with a wide knowledge, depth, and breadth of the world, so for the following image, I want you to take a step back from your machine. In fact, I want you to look at it from as far as you possibly can, without putting a wall between you and the following image.

 

THIS IS A HOT DOG

 

OK, have you looked at the image? Do you know what it is? If you're not sure, and perhaps have a child or aging grandparents, I encourage you to ask them.

 

Did you get hot dog? OK, cool. We’re all on the same page here.

 

Now as a mild disclaimer to my prior disclaimer—I’m familiar with the television series Silicon Valley. I don’t watch it, but I know they had a similar use-case in that environment. This is not that story, but just as when I present on computer vision being used for active facial recognition by the Chinese government to profile the Muslim minority in China and people say, "Oh, you mean like Black Mirror..." I mean “like that,” but I mean it's real and not just "TV magic."

 

Last year in April (April 28, 2018), this innocuous meat enclosure was put through its paces on our top four favorite cloud and AI/ML platforms, giving us all deep insight into how machines work, how they think, and what kinds of things we like to eat for lunch on a random day—the 4th day of July, perhaps. These are our findings.

 

I think we can comfortably say Google is the leader in machine learning and artificial intelligence. You can disagree with that, but you'd likely be wrong. Consider one of the leading machine learning platforms, TensorFlow (TF), an open-source project by Google. TF 2.0 was recently released as an alpha, so you might think, "Yeah, immature product is more like it." But when you peel back that onion a bit and realize Google has been using it and its predecessors internally for years to power search and other internal products, you might feel it more appropriate to call it version 25.0, but I digress.

 

What does Google think a hotdog is?

Image: https://twitter.com/cloud_opinion/status/989691222590550016

 

As you can see, if we ask Google, "Hey, what does that look like to you?" it seems pretty spot on. It doesn't come right out and directly say, "that’s a hot dog," but we can't exactly argue with any of its findings. (I'll let you guys argue over eating hot dogs for breakfast.) So, we're in good hands. Google is invited to MY 4th of July party!

 

But seriously, who cares what Google thinks? Who even uses Google anyway? According to the 2018 cloud figures, Microsoft is actually the leader in cloud revenue, so I'm not likely to even use Google. Instead, I'm a Microsoft-y through and through. I bleed Azure! So, forget Google. I only care about what my platform of choice believes in, because that's what I'll be using.

 

What Microsoft KNOWS a hotdog is!

Image: https://twitter.com/cloud_opinion/status/989691222590550016

 

Let’s peel back the chopped white onion and dig into the tags a bit. Microsoft has us pegged pretty heavily with its confidences, and I can't say I agree more. With 0.95 confidence, Microsoft can unequivocally and definitively say, without a doubt, I am certain:

 

This is a carrot.

 

We're done here. We don't need to say any more on this matter. And to think we Americans are only weeks away from celebrating the 4th of July. So, throw some carrots on the BBQ and get ready, because I know how I'll be celebrating!

 

Perhaps that's too much hyperbole. It’s also 0.94 confident that it's "hot" (which brings me back to my Paris Hilton days, and I'm sure those are memories you've all worked so hard to block out). But... 0.66 gives us relative certainty that if it wasn't just a carrot, it's a carrot stick. I mean, obviously it's a nicely cut, shaved, and cleaned-off carrot stick. Throw it on the BBQ!

 

OK, but just as we couldn't figure out what color “the dress” was, Microsoft knows what’s up, at 0.58 confidence. It's orange. Hey, it got something right. I mean, I figure if 0.58 were leveraged as a percentage instead of (1, 0, -1) it would be some kind of derivative of 0.42 and it would think it’s purple. (These are the jokes, people.)

 

But, if I leave you with nothing more than maybe, just MAYBE, if it's not a carrot and definitely NOT a carrot stick... it's probably bread, coming in at 0.41 confidence. (Which is cute and rings very true of The Neural Net Dreams of Sheep. I'm sure the same is true of bread, too. Even the machines know we want carbs.)

 

But who really cares about Microsoft, anyway? I mean, I'm an AWS junkie, so I use Amazon as MY platform of choice. Amazon was the platform of choice of many police departments to leverage its facial recognition technology to profile criminals, acts, and actions to do better policing and protect us better. (There are so many links on this kind of thing being used; picking just one was difficult. Feel free to read up on how police are trying to use this, and how some states and cities are trying to ban it.)

 

Obviously, Amazon is the only choice. But before we dig into that, I want to ask you a question. Do you like smoothies? I do. My favorite drink is the Açai Super Anti-Oxidant from Jamba Juice. So, my fellow THWACKers, I have a business opportunity for you here today.

 

Between you, me, and Amazon, I think we can build a business to put smoothie stores out of business, and we can do that through the power of robots, artificial intelligence, and machine learning powered by Amazon. Who's with me? Are you as excited as I am?

 

Amazon prefers you to DRINK your Hotdogs

Image: https://twitter.com/cloud_opinion/status/989691222590550016

 

The first thing I'll order will be a sweet caramel dessert smoothie. It will be an amazing confectionery that will put your local smoothie shop out of business by the end of the week.

 

At this point, you might be asking yourself, "OK, this has been hilarious, but wasn't the topic something about businesses? Did you bury the lede?”

 

So, I'll often ask this question, usually of engineers: do businesses want machine learning? And they'll often say yes. But the truth is really, WE want machine learning, but businesses want machine knowledge.

 

It may not seem so important in the context of something silly like a hot dog, but when applied at scale and in diverse situations, things can get very grave. The default dystopian (or "realistic") future I lean towards is plasma pumps in a hospital. Imagine a fully machine-controlled, AI-leveraged device like a plasma pump. It picks out your blood type based on your chart, or perhaps it pricks your finger, figures out what blood it should give you, and starts to administer it. Not an unrealistic future, to be honest. Now, what if it was accurate 99.99999% of the time? Pretty awesome, right? But let's say it was as accurate as Google was with a hot dog, at 98% confidence. A business can hardly accept the liability that even 99.99999% might bring. Drop that down to 98%, or the more realistic 90-95%, and that is literal deaths on our hands.
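The arithmetic behind that point is worth spelling out. The transfusion volume below is invented; the point is the order of magnitude, not the exact number:

    # Why a few points of accuracy matter at scale (hypothetical volume)
    transfusions_per_year = 1_000_000

    for accuracy in (0.9999999, 0.98, 0.95):
        expected_errors = transfusions_per_year * (1 - accuracy)
        print(f"{accuracy:.7%} accurate -> ~{expected_errors:,.0f} bad calls per year")
    # 99.99999% works out to well under one bad call a year; 98% is about 20,000.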

 

Yes, businesses don't really want machine learning. They say they do because it's popular and buzzword-y, but when lives are potentially on the line, tiny mistakes come with a cost. And those mistakes add up, which can affect the confidence the market can tolerate. But hey, how much IS a life really worth? If you can give up your life to make a device, system, or database run by machine learning/AI better for the next person, versus, oh I don't know, maybe training your models better and a little longer—is it worth it?

 

There will come a point or many points where there are acceptable tolerances, and we'll see those tolerances and accept them for what they are at a certain point because, "It doesn't affect me. That's someone else's problem." Frankly, the accuracy of machine learning has skyrocketed in the past few years alone. That compounded with better, faster, smarter, and smaller TPUs (tensor processing units) means we truly are in a next-generation era in the kinds of things we can do. 

 

Google Edge TPU on a US Penny

Image: Google unveils tiny new AI chips for on-device machine learning - The Verge

 

Yes, indeed, the future will be here before we know it. But mistakes can come with dire costs, and business mistakes will cost in so many more ways because we "trust" businesses to do the better or the right thing.

 

"Ooh, ooh! You said four cloud providers. Where's IBM Watson?"

 

Yes, you're right. I wasn't going to forget IBM Watson. After all, they're the businessman's business cloud. No one ever got fired for buying IBM, blah blah blah, so on and so forth.

 

I always include this for good measure. No, not because it’s better than Google, Microsoft, or Amazon, but because of how funny it is.

 

IBM is like a real business cloud AI ML!

Image: https://twitter.com/cloud_opinion/status/989762287819935744

 

I’ll give IBM one thing—they're confident in the color of a hot dog, and really, what more can we ask for?

 

Hopefully, this was helpful, informative, and funny (I was really angling for funny). But seriously, you should have a better foundation for the constantly "learning" nature of the technology we work with. Consider how large a knowledge base of information these machines have, and what they can learn in such a short time to make determinations about what a particular object is. Then also consider, if you have young children: send them into the kitchen and say, "Bring me a hot dog," and you're far more likely to get a hot dog back than you are to get a carrot stick (or a sweet caramel dessert). The point being, the machines that learn and the "artificial" intelligence we expect them to work with and operate from have the approximate intelligence of a 2- or 3-year-old.

 

They will get better, especially as we throw more resources at them (models, power, TPUs, CPU, and so forth) but they're not there yet. If you liked this, you'll equally enjoy the next article (or be terrified when we get to the actual subject matter).

 

Are you excited? I know I am. Let us know in the comments below what you thought. It's a nice day here in Portland, so I'm going to go enjoy a nice hot dog smoothie. 

This week's Actuator comes to you from sunny San Diego and Cisco Live! If you are here, and reading this, then you still have time to stop by Booth #1621 and say hello. I enjoy talking data with anyone, but especially with our customers.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Baltimore ransomware attack will cost the city over $18 million

I wonder how much it would have cost for the city to keep its systems up to date and patched in a timely manner.

 

Many iOS Developers Don’t Use Encryption: Report

So, Apple tries to force the use of encryption, but allows developers to easily override it. This is why we can't have secure things.

 

EU will force electric cars to emit a noise below 20 km/h on July 1

I would pay good money for my car to sound like George Jetson's.

 

Employees are almost as dangerous to business security as hackers and cybercriminals

Always a difficult conversation, when you want to enforce security standards and your colleagues take offense, as if you don't trust them. And yet, they are often the weakest link. Security is necessary to keep good people from doing dumb things.

 

Uber’s Path of Destruction

A bit long, but worth every minute. Uber, along with other tech companies, is horrible for our economy. And it will remain so until we elect legislators who understand not only technology, but the business finance of technology companies.

 

Microsoft and Oracle link up their clouds

The biggest shock in this article was discovering that Oracle has a cloud.

 

AWS launches Textract, machine learning for text and data extraction

Just a quick PSA here, but the code and model driving this service likely have inherent biases, in a similar manner to the issues with facial recognition tech. "Trust, but verify" should be the disclaimer pasted at the top of the page for this and every other AI-as-a-Service out there.

 

 

This is my first Cisco Live! and I think there's a chance I'm the only DBA here:

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here's an interesting article from my colleague Mav Turner. He gives a good overview of edge computing and offers suggestions for overcoming its challenges. I like the idea of using technology like this to reduce latency.

 

Edge computing has become a critical enabler for successful cloud computing as well as the continued growth of the Internet of Things (IoT). Edge computing does come with its own unique challenges, but the advantages far outweigh them.

 

What Is Edge Computing?

To understand why edge computing is so important, let’s first look at cloud computing for government agencies, which is fast becoming a staple of most agency environments.

 

One of the primary advantages of cloud computing is the ability to move a vast amount of data and processing out of the agency—off-premises—so the federal IT pro no longer has to spend time, effort, and money maintaining it.

 

Logistically, this means agency data is traveling hundreds or thousands of miles back and forth between the end-user device and the cloud. Clearly, latency can become an issue. Enter edge computing, which essentially places micro data centers at the “edge” of the agency network. These micro data centers, or edge devices, serve as a series of distributed way stations between the agency network and the cloud.

 

With edge devices in place, computing and analytics power is still close to end-user devices—eliminating the latency issue—and the data that doesn’t have to be processed immediately is routed to a cloud data center at a later time. Latency problem solved.
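A toy sketch of that pattern: handle latency-sensitive readings locally at the edge, and queue everything else for the cloud. The sensor names and threshold below are invented for illustration:

    from queue import Queue

    cloud_upload_queue = Queue()

    def handle_reading(sensor_id, value, latency_sensitive):
        if latency_sensitive:
            # React immediately at the edge; no round trip to a distant data center
            if value > 100:
                print(f"{sensor_id}: threshold exceeded, acting locally")
        else:
            # Defer to the cloud for heavier analytics later
            cloud_upload_queue.put((sensor_id, value))

    handle_reading("pump-7", 112, latency_sensitive=True)
    handle_reading("pump-7", 112, latency_sensitive=False)
    print(f"{cloud_upload_queue.qsize()} readings queued for the cloud")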

 

Overcoming Edge Computing Challenges

The distributed nature of edge computing may seem to bring more complexity, more machines, greater management needs, and a larger attack surface.

 

Luckily, as computing technology has advanced, so has monitoring and visualization technology to help the federal IT pro realize the benefits of edge computing without additional management or monitoring pains.

 

Strategizing

Start by creating a strategy; this will help drive a successful implementation.

 

Be sure to include compliance and security details in the strategy, as well as configuration information. Create thorough documentation to standardize your hardware and software requirements.

 

Be sure patch management is part of the security strategy. This is an absolute requirement for ensuring a secure edge environment, as is an advanced security information and event management (SIEM) tool that will ensure compliance while mitigating potential threats.

 

Monitoring, Visualizing, and Troubleshooting

Equally important to managing edge systems are monitoring, visualization, and troubleshooting.

 

Monitoring all endpoints on the network will be a critical piece of successful edge computing management. Choose a tool that not only monitors remote systems, but provides automated discovery and mapping, so the federal IT pro has a complete understanding of all edge devices.

 

Additionally, be sure to invest in tools that provide full infrastructure visualization, so the federal IT pro can get a complete picture of the network at all times. Add in full network troubleshooting to be sure the team can monitor its entire infrastructure as well.

 

Creating a sound edge computing implementation strategy and using the right tools to monitor and manage the network will ease the pains and let the benefits of edge computing be fully realized.

 

Find the full article on Government Technology Insider.

 


If, like me, you believe we’re living in a hybrid IT world when it comes to workload placement, then it would make sense to also consider a hybrid approach when deploying IT infrastructure monitoring solutions. What I mean by this is deciding where to physically run a monitoring solution, be that on-premises or in the public cloud.

 

I work for a value-added reseller, predominantly involved with deploying infrastructure for virtualization, end-user computing, disaster recovery, and Office 365 solutions. During my time I've run a support team offering managed services for a range of different-sized customers, from SMB to SME, and I've managed several different monitoring solutions. They range from SaaS products that bolster our managed service offering to monitoring applications installed locally at customer sites. Each has its place, and there are reasons for choosing one platform over another.

 

Licensing Models

Or what I really mean, how much does this thing cost?

 

Typically for an on-premises solution, there will be a one-off upfront cost and then a yearly maintenance plan. This is called perpetual licensing. The advantage here is that you own the product with the initial purchase, and then maintaining the product support and subscription entitles you to future upgrades and support. Great for a CapEx investment.

 

SaaS offerings tend to be a monthly cost. If you stop paying, you no longer have access to the solution. This is called subscription licensing. Subscription licensing is more flexible in that you can scale up or scale down the licenses required on a month-by-month basis. I will cover more on this in a future blog post.

 

Data Gravity

This is physically where the bulk of your application data resides, be that on-premises or in the public cloud. Some applications need to be close to their data so they can run quickly and effectively, and a monitoring solution should take this into consideration. It could be bad to have a SaaS-based monitoring solution if you have time-sensitive data running on-premises that needs near-instantaneous alerts. Consider time sensitivity and criticality when deciding what a monitoring solution should deliver for you. If being alerted about something up to 15 minutes after an event has occurred is acceptable, then a SaaS solution for on-premises workloads could be a good fit.

 

Infrastructure to Run a Monitoring Solution

Some on-premises monitoring solutions can be resource-intensive, which can prove problematic if the existing environment is not currently equipped to run it. This could lead to further CapEx expenditure for new hardware just to run the monitoring solution, which in turn is something else that will need to be monitored. SaaS, in this instance, could be a good fit.

 

Also, consider this: do you want to own and maintain another platform, or do you want a SaaS provider to worry about it? If we compare email solutions like Exchange and Exchange Online, Exchange is in many instances faster for user access and you have total control over how to run the solution. Exchange Online, however, is usually good enough and removes the hassle of managing an Exchange server. Ask yourself: do you need good enough or do you need some nerd knobs to turn?

 

Conclusion

A lot of this article may seem obvious, but the key to choosing a monitoring solution is to start out with a set of criteria that fits your needs and then evaluate the marketplace. Personally, I've started with a list of X, Y, and Z requirements, with a weighting on each item. Whichever product scored the highest ended up being the best choice.

 

To give you an idea of those requirements, I typically start out with things like:

  • Is the solution multi-tenanted?
  • How much does it cost?
  • How is it licensed?
  • What else can it integrate with (like a SIEM solution, etc.)?
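Here's a quick sketch of how that weighted scoring can work in practice. The criteria, weights, and scores below are placeholders; substitute your own list and a scale that suits you:

    # Weighted scoring of candidate monitoring products (all values illustrative)
    weights = {"multi_tenant": 3, "cost": 5, "licensing_fit": 4, "integrations": 2}

    products = {
        "Product A": {"multi_tenant": 5, "cost": 2, "licensing_fit": 4, "integrations": 3},
        "Product B": {"multi_tenant": 3, "cost": 4, "licensing_fit": 3, "integrations": 5},
    }

    for name, scores in products.items():
        total = sum(weights[c] * scores[c] for c in weights)
        print(name, total)
    # Whichever product scores highest against *your* weights wins the evaluation.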

 

  A good end product is always the result of meticulous preparation and planning.

The bigger the project, the more risk there is. Any decent-sized project needs a professional project manager to run it and help mitigate that risk. When it comes to big projects, great project managers are worth their weight in gold. Over the years, I’ve worked with some horrible project managers and some fantastic project managers.

 

So, what makes a good project manager? A successful project manager will do several things to help keep the project on schedule, including:

 

  • Check in with team members
  • Collate regular feedback from the team members
  • Help team members escalate issues to the right people to resolve them
  • Coordinate meetings with non-project team members
  • Provide positive feedback to help drive the project forward
  • Serve as a liaison between IT and the business
  • Work with the project team on release night to see the project through to completion

 

One of the best experiences I had with a project manager was back when SQL Server 2005 was released in November 2005. We pitched to management to upgrade the loan origination system at the company from SQL Server 2000 to SQL Server 2005. The project manager was fantastic, bringing in development and business resources as needed to test the system prior to the upgrade.

 

The part that sticks out in my mind the most is the night of the upgrade. For other projects with different project managers, the managers would leave by 6 p.m. no matter what. For this release, the project manager came in later in the day with the rest of the folks on the project and stayed until 4 a.m. while we did the release. When the team needed to order some dinner, our project manager made sure everyone got dinner (and that vegetarians got vegetarian food) and that anything we needed was taken care of. Even though this release went until 4 or 5 a.m. (I don't remember exactly when we finished), it went shockingly smoothly compared to others I've been a part of.

 

On the flip side, I've had plenty of project releases that were the exact opposite experience. The project manager collected status updates every week from team members, and the weekly project status meetings basically consisted of regurgitating those updates back to the rest of the team. When it came to release night, the project manager in many cases was nowhere to be found, and sometimes unavailable via email or cell phone. This made questions or problems with the release difficult, if not impossible, to resolve.

 

When projects don't have a good project manager guiding them to completion, it's easy for the project to fail. On numerous occasions, I've seen IT team members attempt to step into the role of project manager. Someone then fills the role on a part-time basis, and project management doesn't get the full attention it deserves. Having someone not focused on the details means tasks are going to fall through the cracks and things that need to be done aren't done (or aren't done in a timely fashion). All of this can lead to the project failing, sometimes in a rather spectacular or expensive way; and a failed project, for any reason, is never a good thing.

Hybrid IT merges private on-premises data centers with a public cloud vendor. These public cloud vendors have a global reach, near limitless resources, and drive everything through an API. Tapping into this new resource is creating some new technologies and new career paths. Every company today can run a business analytics system. This was never feasible in the past because these systems traditionally took months to deploy and required specialized analysts to maintain and run. Public cloud providers now offer these business analytics systems to anyone as a service and people can get started as easily as swiping their credit card.

 

We're moving away from highly trained and specialized analysts. Instead, we need people with a more holistic view of these systems. Businesses aren't looking for specialists in one area; they want people who can adapt and merge their foundation with cross-silo skillsets. People are gravitating more and more toward development skills. Marketing managers, for example, are learning SQL to analyze data and see whether a campaign is working. People are seeking skills that let them interact with the cloud through programming languages, because while the public cloud may act like a traditional data center, it's still a highly programmable one.
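To make that concrete, here's a minimal sketch of the kind of ad-hoc analysis a marketing manager might run. The database file, the campaign_clicks table, and the column names are all hypothetical; the point is simply that a little SQL plus a few lines of Python turns raw campaign data into an answer.

```python
# A minimal sketch of ad-hoc campaign analysis; the database file,
# "campaign_clicks" table, and column names are all hypothetical.
import sqlite3

conn = sqlite3.connect("marketing.db")  # assume a local extract of campaign data

query = """
    SELECT campaign_id,
           COUNT(*)        AS clicks,
           SUM(converted)  AS conversions,
           ROUND(100.0 * SUM(converted) / COUNT(*), 2) AS conversion_rate_pct
    FROM campaign_clicks
    GROUP BY campaign_id
    ORDER BY conversion_rate_pct DESC;
"""

for campaign_id, clicks, conversions, rate in conn.execute(query):
    print(f"{campaign_id}: {clicks} clicks, {conversions} conversions ({rate}%)")

conn.close()
```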

 

The IT systems operator must follow this same trend and continue adding software development and scripting abilities to their toolbelt. The new role of DevOps engineer has emerged because of hybrid IT. The role was originally created because of the siloed interaction between software developers, who are primarily concerned with code, and the systems operations team, who must support everything once it hits production. DevOps was meant to break down those silos and create more communication between the two teams, but as time has passed, the two groups have merged because they need to rely on each other. Developers showed operations how to run code effectively and efficiently, and SysOps folks showed developers how to monitor, support, and back up the systems their code runs on.

 

A lot of new skills are emerging, but I want to focus on the three I feel are becoming the heavy favorites: infrastructure as code, CI/CD, and monitoring.

 

Infrastructure as Code (IaC)

According to this blog from Azure DevOps, “Infrastructure as Code is the management of infrastructure (networks, virtual machines, storage) in a descriptive model,” or a way to use code to deploy and manage everything in your environment. A lot of data centers share similar hardware from the same set of manufacturers, yet each is unique. That's because people configure their environments manually. If a change is required on a system, they make the change and complete the task. Nothing is documented, so nobody knows what the change was or how it was implemented. When the issue happens again, someone else repeats the same task or does it differently. As this goes on, our environments morph and drift with no way to reset them, and each one becomes a unique snowflake.

 

A team that practices good IaC will always make changes in code, and for good reason. Once a task is captured in code, that code is a living document that can be shared and run by others for consistent deployment. The code can be checked into a CI/CD pipeline to test it before it hits production and to version it for tracking purposes. As the list of tasks controlled by code grows, we can free up people's time by putting the code on a scheduler and letting our computers manage the environment. This opens the door to automation and allows people to work on more meaningful projects.
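As a rough illustration (not a full IaC tool), here's a minimal Python/boto3 sketch of the idea: the desired state lives in code and in version control, and applying it repeatedly converges the environment toward that state. The bucket names and tags are made up, and a real deployment would also handle regions, permissions, and error cases.

```python
# Hypothetical sketch: desired state declared in code, applied idempotently.
import boto3
from botocore.exceptions import ClientError

# Desired state; bucket names and tags are invented for illustration.
DESIRED_BUCKETS = {
    "example-app-logs":    {"Environment": "prod", "Owner": "ops-team"},
    "example-app-backups": {"Environment": "prod", "Owner": "ops-team"},
}

s3 = boto3.client("s3")

def apply():
    for name, tags in DESIRED_BUCKETS.items():
        try:
            s3.head_bucket(Bucket=name)        # bucket already exists: nothing to create
        except ClientError:
            s3.create_bucket(Bucket=name)      # create the missing bucket
        s3.put_bucket_tagging(                 # re-apply tags on every run
            Bucket=name,
            Tagging={"TagSet": [{"Key": k, "Value": v} for k, v in tags.items()]},
        )

if __name__ == "__main__":
    apply()
```

Running the same script twice produces the same end state, which is exactly what makes it safe to hand off to a scheduler or a pipeline.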

 

Continuous Integration/Continuous Deployment (CI/CD)

When you're working in a team of DevOps engineers, you and your teammates are going to be writing a lot more code. You may be writing IaC code to deploy an S3 bucket while a teammate works on deploying an Amazon RDS instance, each of you writing in an isolated environment. The team needs a way to centrally store, manage, and test that code. Continuous integration merges those separate pieces into one unit, and it does so frequently. Continually merging and testing code shortens feedback loops and the time it takes to find bugs. Continuous delivery is the practice of taking what's in the CI code repo and continuously delivering and testing it in different environments. The end goal is to deliver a bug-free code base to our end customer, whether that customer is an internal employee or someone external paying for our services. The more frequently we can deliver our code to environments like test or QA, the better our product will become.
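For a sense of what a CI stage actually does, here's a toy Python sketch: on every merge, run the checks in order and stop at the first failure. The flake8 and pytest commands are just examples and assume those tools are installed; real pipelines (Jenkins, GitHub Actions, GitLab CI, and so on) express the same idea declaratively.

```python
# Toy sketch of a CI stage runner: run each check, fail fast, promote only on success.
import subprocess
import sys

STAGES = [
    ("lint", ["python", "-m", "flake8", "."]),         # assumes flake8 is installed
    ("unit tests", ["python", "-m", "pytest", "-q"]),  # assumes pytest is installed
]

def run_pipeline() -> int:
    for name, cmd in STAGES:
        print(f"--- running {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"{name} failed; stopping the pipeline")
            return result.returncode
    print("all stages passed; code is ready to promote to the next environment")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```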

 

Monitoring

We can never seem to get away from monitoring, and that makes sense given how important it is. Monitoring provides the insight we need into our environments. As we start to deploy more of our infrastructure and applications through code and deliver those resources quickly and through automation, we need a monitoring solution to alert us when there's an issue. The resources we provision with IaC or ...manually... will be consumed, and our monitoring solution helps tell us how long those resources will last and whether we need to provision more to prevent a loss of service. Monitoring gives us historic trends for our environment, so we can compare how things are doing today to how they were doing yesterday. Did that new global load balancer help our traffic or hurt it? In today's security landscape, we need eyes everywhere, and a monitoring solution can help notify us if there's a brute-force attack or if a single user is trying to log in from another part of the country.
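As a simple illustration of the baseline idea, here's a hypothetical Python sketch that compares today's value of a metric (say, login attempts per hour) against recent history and flags anything that drifts too far. The numbers are made up; in practice, the history would come from your monitoring tool's API.

```python
# Hypothetical sketch: flag a metric that drifts too far from its recent baseline.
from statistics import mean, stdev

def within_baseline(history, today, sigmas=3.0):
    """Return True if today's value is within `sigmas` standard deviations of the baseline."""
    baseline = mean(history)
    spread = stdev(history)
    return abs(today - baseline) <= sigmas * spread

# Made-up numbers: seven days of login attempts per hour, then today's spike.
history = [110.0, 118.0, 105.0, 122.0, 116.0, 109.0, 114.0]

if not within_baseline(history, today=480.0):
    print("Alert: login rate is far outside the recent baseline")  # e.g., possible brute-force attack
```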

 

Hybrid IT is a fast-growing market bringing lots of change to businesses as well as creating new careers and new skillsets. From hybrid IT, new careers are emerging because of the need to merge different skillsets from traditional IT, development, and business units. This combination is stretching people’s knowledge. The most successful people are merging what they know about traditional infrastructure with the new offerings from the public cloud.

THWACKcamp is Back! (video on YouTube)

 

Here at SolarWinds, convention season is just beginning to heat up. Whether you’re lucky enough to travel to these shows or are just following our exploits online, you’ll see us across the globe—from London (Info Security Europe) to New York (SWUG), Vegas (Black Hat), San Diego (Cisco Live!), San Francisco (VMworld, Oracle World), and Singapore (RSA)—demoing and discussing the best monitoring features, whether they’re brand-new or just new-to-you.

 

But there’s one conference that, for us, is circled in red marker on our calendar: THWACKcamp 2019, which is so very happening October 16 – 17, 2019.

 

Running the Numbers

Now in its 8th year, THWACKcamp has grown in both quality and quantity each year. Last year, we saw more than 2,300 people attend, consuming 22 hours of content accompanied by real-time discussions in live chat. On top of that, people kept coming back for more, and viewed the recordings of those same sessions over 16,000 more times after THWACKcamp 2018 ended.

 

This coming year promises to be our most ambitious one yet—and not just because we expect more attendees, more content, or more amazing giveaway prizes.

 

Brand New Formula, Same Great Taste!

First, I want to talk about the things that AREN’T changing.

 

THWACKcamp 2019 is still 100% free and 100% online. That means you don’t have to beg your boss for budget, risk one of TSA’s “very personal” pat-down procedures, fight flocks of other IT pros thronging to the next session, or deal with less-than-optimal hotel options.

 

The event is still going to be two full days packed with content and live segments. A legion of SolarWinds folks ranging from Head Geeks to engineers to product managers will be on chat to field questions, offer insights, and take conversations offline if needed.

 

And of course, we're still going to have some awesome prizes to give away throughout the event. (Look, we know you come for the information, but we also know the prizes add a whole 'nother level of fun to it, and we're not about to give that up either. We have as much fun brainstorming what cool swag to give away as you do winning it.) That said, this thread on THWACK.com will let you offer your ideas on what you'd like to see us give away: https://thwack.solarwinds.com/message/419618#419618

 

SO... what about the “new and improved” part of THWACKcamp?

The thing you’ll notice most is every session is going to take the time it needs, rather than conforming to a standard 30- to 40-minute window. This allows us to intersperse deep-dive topics with quick 10-minute how-tos, and even a few funny “commercials” to make sure you’re paying attention.

 

The second thing you’ll notice is that we’ve broken out of the studio. We love our set and still have a bunch of the sessions there, but you’ll also see us in discussions in lounge areas, outside, and maybe even on-location at events. IT professionals are rarely “at rest” and THWACKcamp reflects that this year too.

 

Both of those elements allowed us to make one other big improvement: a single track of sessions each day. We’ll be able to cover more topics and ensure that everyone is in the right “room” at the right time to hear all the THWACKcamp-y goodness.

 

And the last thing you'll notice is how THWACKcamp will be even more interactive than ever. During the sessions, we're adding an interactive question-and-answer system called Sli.do. If you've attended a SWUG (and if you haven't, you really should! http://THWACK.com/swug), you know exactly how this works. We'll use it to get your feedback in real time, find out how many people prefer one feature over another, and let you post your own questions for our staff to answer, where they won't get lost in the mad banter of the live chat window. Speaking of chat, it'll still be there, in a THWACKcamp "watercooler" section, where you can talk about your experiences with SolarWinds modules, ask for tips on configuring ACLs, or debate the supremacy of the MCU vs. the DCU.

 

Take My Money!

By this point, I hope you're shouting at your screen "BUT LEON, HOW DO I SIGN UP???" If this is you, maybe have the barista bring you a decaf on the next round. And while you're waiting, head over to the THWACKcamp 2019 registration page. You'll be able to sign yourself up for the event and see the full schedule of sessions.

 

But you'll also gain valuable information between now and October 16. We'll be sharing exclusive blog posts, videos, and even behind-the-scenes images to show you how an event of this magnitude comes together, and to prepare you to get the most out of the THWACKcamp experience.

 

Also exclusive to folks who register will be “Ask Me (Almost) Anything” sessions. After you complete your registration, you’ll get access to that same Sli.do system I mentioned earlier. Once again, using Sli.do, you will have an opportunity to submit questions or upvote questions from other folks. We’ll host five live on-camera sessions between now and October 16 to answer those questions for you. But remember, you only get access to that after you register.

 

So, what are you waiting for? Go register now: THWACKcamp 2019! And don’t be selfish, either. Share that link with coworkers, colleagues, and friends in IT who may be thinking “No way am I going to make it to a conference this year.” Sure you are. Here at SolarWinds we’ve got a solution for you, just like we do for so many of your IT challenges.
