Monitoring Challenges with Distributed Environments
Many, if not all, medium-to-large businesses have offices that are geographically distant from each other. With the advent of cloud and hybrid IT, the challenge of monitoring everything, everywhere, is only going to grow. On top of this, the workforce is more mobile, and IT needs to monitor, manage, and secure devices in and outside the office. There are many ways to monitor these environments, such as using VPN or MPLS connectivity to create one large network, deploying data collection servers at each location, or using agent-based technology to have all monitored devices “call home.” Each method has its pros and cons, along with possible limitations.
In this session, we will be reviewing different tools, methods, and technologies to help you understand the right use cases for each, and make the right decisions to tackle the challenges of monitoring distributed environments.
THWACKcamp! [Electronic beat] THWACKcamp!
In today's world, we have employees and server workloads everywhere.
You mean, actually distributed environments?
Yeah, and let's not forget the complexity that cloud introduces as well.
The expectation is that everything "just" works. Monitoring the performance and availability of devices can present many unique and challenging scenarios. Joining me today are product manager Steven Hunt from our Austin office.
Hey, how's it going?
And Shawn Zenz from our Ottawa office.
Thank you for having me.
What I want to do here is put you guys in the hot seat, and throw out some monitoring scenarios and see what you guys can come up with.
Let's do it!
Little U.S. versus Canada, eh?
All right guys, let's start with the first scenario. Let's assume you've got two large offices that are connected via a fast MPLS connection. How would you go about monitoring it?
So, there's a lot of different architectures that could come into play in the scenario, but let's talk about a fairly common one. So, if we're trying to gather information from both the main office and the second office, typically, what we would do is have a server that's collecting my information over in the main office. Depending upon how much is over in my second office, like how many servers, how many computers, what ultimately I'm trying to monitor, I'm going to start looking at, "Do I need to just deploy an agent to connect out there? Or can I just do a direct connection and grab some information?" It also kind of depends on what information that I'm trying to get out of it. So, I may have a scenario where I have an additional server on this side to start polling information, gathering data from all the servers on the second office side. Or I could just do a direct connection from here, over to gather information directly from the servers. So, it really comes down to, "What is it we're trying to gather, and how many different devices are we trying to get it from?"
So, my assumption is, since we have an MPLS connection, getting over to the second office really isn't a problem. It's the possibility of the latency introduced from going from LA to New York, or things along those lines, that could potentially justify a second Orion instance or a poller in the second office.
Yeah, so it really depends on how far apart these connections are. Obviously, there's some limitations on how well that connection's going to be established. You're within a certain distance if you've got that guaranteed connection in place. But it's still important to understand what is needing to be collected in that second office, and then ultimately, how much information do I need to get? One of the key things that people often don't think about is from a collection standpoint, "What am I trying to get? Am I just trying to get basic metric information from an application, or from a server itself? Or am I trying to measure something from one office to the other?" If I'm trying to measure something from one office to the other, then it's really important to understand what's going on in between this connection, and how does that impact the collection aspect of things that are sitting over here in the second office.
Yeah, I think a great example would be if you're looking at web performance. If you're monitoring it in the second office, you're going to see the milliseconds: it could be one or two, and then it could double to three or four. Whereas if you're monitoring from the original office, that one- or two-millisecond bump is minute, because you've got the buffer of the whole actual transit of getting across to the second office.
Right. The other thing you need to take into account is what happens if this connection fails? If that connection goes down, and I have a whole bunch of devices that I'm trying to monitor on this side, what notifications am I going to start getting? Typically, you're going to start flooding people with information. But if you can designate that all of these devices are dependent upon the connection or the firewall, or something on the secondary side, then you can take that into account. And ensure that once we've determined that there's a problem at this point of the connection, we know that there's an issue getting to the rest of that data, and we don't have to flood anybody with a bunch of information.
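The dependency idea Steven describes can be sketched in a few lines: if the device that everything at the remote site depends on (the WAN link or firewall) is down, suppress the alerts for everything behind it and page only on the root cause. This is a minimal illustration of the concept, not the Orion alerting engine; the device names and status model are made up.

```python
# Sketch of dependency-based alert suppression: if a parent device
# (e.g., the remote site's WAN router) is down, children behind it
# are suppressed instead of firing their own alerts.
# Device names and statuses are illustrative, not the Orion API.

def effective_alerts(statuses, dependencies):
    """statuses: {device: 'up' | 'down'}
    dependencies: {child: parent} -- child is only reachable via parent.
    Returns the list of devices that should actually page someone."""
    alerts = []
    for device, status in statuses.items():
        if status != "down":
            continue
        parent = dependencies.get(device)
        # Suppress the child's alert if its parent is already down.
        if parent and statuses.get(parent) == "down":
            continue
        alerts.append(device)
    return alerts

statuses = {
    "wan-router": "down",
    "branch-fw": "down",      # behind the WAN router
    "branch-srv-01": "down",  # behind the firewall
    "hq-srv-01": "up",
}
dependencies = {
    "branch-fw": "wan-router",
    "branch-srv-01": "branch-fw",
}

print(effective_alerts(statuses, dependencies))  # only the root cause pages
```

With the dependency map in place, a link failure produces one alert instead of one per unreachable device.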
Gotcha. Now, would you just put a poller over here, or would you actually do a whole second office? Would a poller queue up the data if you lost connectivity? Or would you put a whole second Orion instance over there?
So, that's a really good question. And again, it depends on what I'm trying to collect and how much of it is there. But there's the aspect of having the flexibility of a platform that will allow you to just direct-connect and monitor stuff agentlessly, or deploy an actual agent on any of the devices out there and start collecting that information, without having to point additional resources out there to actually start collecting that data. But if we're in a scenario where we're collecting a bunch of information, or the connection itself could be impacting the amount of data that needs to flow across, then I can stick that second poller inside the second data center, or the second site, and allow it to be responsible for the collection of all the information, and then send it back across to that main site.
Yeah, you know, we've got a couple of large cruise lines that actually have Orion instances on the boats, just to collect the data, because sending all of that across the sat-link would just be pointless and kill everything, so they have unique instances on every single boat. So, that would be applicable in this scenario, though we're looking at boats instead of offices.
And ultimately, if you're trying to get information from one site to the other, you're also potentially trying to get it at a certain interval, right? So, your polling intervals are set to allow you to get the information you need in the time that you need it. If the connection from one site to the other is not advantageous for collecting that data in that timeframe, then you're going to need a solution that allows you to collect that data and get it across. And then you have to be cognizant about how much real-time data you're trying to get in a given time, because you can only get so much coming across that link.
Gotcha, gotcha. Now, Shawn, you've been pretty quiet on this scenario. I know you're part of the MSP side of our business; how would you go about monitoring this, or is there anything different? What would your solution look like?
Yeah, for sure. So in this type of environment, where, yes, it is connected via MPLS between the two offices, that type of scenario would look like, for example, a probe that's sitting at each one of the different locations here. And the responsibility of that probe is to go out, discover that environment, discover the network there, for example. But especially things like the workstations in those environments, and actually distribute agents to each one of those workstations automatically. So they'll go ahead and push agents to those Windows devices, to those Mac devices, for example, and actually do the deployment for me, to go and collect asset information, collect the patch details, and that type of detail from those devices.
Now, where does this report to when it's done? Do you have something in the cloud, I assume? Or is it on-premises, off-premises?
Yeah, so to answer that, the question is both. So there's both a cloud solution as well as an on-premises solution. So, of course, I can have that solution sitting, you know, maybe in a data center, or perhaps we can actually host the solution for the customer. So, in that example, I'm logging into this solution, whether it's in the cloud or whether it's on-premises, and actually, from that console, connecting to each one of those devices; doing that patch management, doing that remote control, what have you.
Oh, patch management. How do we look at that from maybe a core standpoint? I mean, I know we're monitoring devices, but we sure as heck want them to be at least updated so nobody's getting some ransomware.
Yeah, so there's a couple of different aspects. We're talking about MSP-type solutions from Shawn's perspective, and I know most of our MSP customers, they're looking for kind of that complete package, right? Being able to do all that within one.
That single pane of glass, we often say.
Exactly. Oftentimes, I find that a lot of our customers are looking for point solutions. I'm looking for a monitoring solution, or I'm looking for a patch and security solution, or I'm looking for a log analytics solution. It doesn't necessarily have to be fully integrated; it doesn't have to come within one box, one umbrella. Luckily, we're able to provide many of those different types of solutions. But oftentimes, it doesn't have to be so tightly integrated to be able to provide the type of capabilities that customers are looking for. Again, MSPs, they're looking for it all to be within one umbrella, right? Whereas with the Orion products, we typically see that the customers are going to want, "Okay, I want that monitoring solution," and "I want network monitoring, so I can monitor this aspect of it." "And then I want server monitoring so I can monitor this aspect of it." "Then I want web monitoring so I can measure what's going on from this site to that site," things of that nature.
Awesome. Now, I think you guys gave us a couple of different scenarios on deployment methods, depending on the speed of your connection, what your polling interval is, and what you're actually trying to monitor. If you're just trying to see, is the second site up, and send some small packet across to grab that information, then you've got a lot of flexibility. But if you're trying to see beyond the firewall, look at the servers, the desktops, the antivirus status, things along those lines, you've got different scenarios. But I think both are applicable to any individual; they both can figure it out, and figure out where they want to go next.
All right, let's look at scenario number two. Let's assume you've got a traditional office, and you're adding some cloud in. How would you go about monitoring that?
Well, a lot of the same principles apply in the scenario we were talking about earlier. Typically, we're going to have that main server, that main poller in the main office. The complexity comes into the cloud architecture— really, how are the users or how has the business deployed those different resources within the cloud? So, smaller environments, or more cloud-focused environments— maybe they're transitioning a lot of their workloads into the cloud— they're going to probably have a need to do some remote collection, right? So, you can again, use that agent-based mechanism to deploy an agent on the actual cloud instances, and collect the information that's necessary for that. Another scenario that comes into play, especially with some of our larger customers, where they have established direct connections between their data center and the AWS sites.
This allows them to use the agentless based collection mechanism. So they don't have to worry about, necessarily, agent deployment, but they do have to worry about the network connections, the network configuration, between their main site and AWS service. So, all of the traditional aspects of agentless-based collection apply as well.
So in this example, we use AWS as our cloud provider, and you said we could use agent or agentless technology, depending on whether you've got your tunneling set up to pull information back to Orion. Do we have anything else that we can leverage, since it is AWS?

One of the things that we can do, and that our customers take advantage of, is an integration we have with the AWS APIs. So, you can actually collect information from AWS as a platform. There are some basic metrics, CPU, IO, that can be collected, as well as just metadata about the actual instances that are running. It gives you kind of an inventory, if you will, of your cloud instances that are running, plus some of those basic metrics. That allows you to get a sense of what the cloud provider understands about what you've actually deployed and what you need to monitor. Then we mix that together; we create a seamless view of that cloud provider data, along with the workload and operating system data that we can collect from our agent or agentless-based mechanisms. And we put that together so I, as an end user, can see, "What's the cloud provider telling me?" and then, "What is the operating system telling me about the workloads that are running?"
Gotcha, gotcha. Now, in this example, we definitely have Amazon dialing home, I know that's probably not the right term, but we're collecting from Amazon and pulling it on-site. I've been going to shows, and I know we actually have another THWACKcamp session about this later on, but we do have the possibility to actually now put Orion out here, and have the data go upstream, right?
Right, so we have many customers out there that are deploying Orion in the cloud. So, all it takes is basically right sizing, getting the right size instances to deploy Orion. And then you can run that literally in any of the AWS data centers, be able to, you know— we were talking about that multi-site scenario earlier. In that situation, your main site is effectively the cloud, the public cloud, and then it can collect all of the information from any of the sites that you have geographically dispersed.
Sweet. No, I think that gives us a good idea that treating the cloud is not any different than just a second site. You can pull the data, and there aren't any limitations to using the core Orion products. Shawn, how would you manage and tackle this scenario from an MSP point of view?
Yeah, for sure. When you look at the MSP solutions, it's very much the new normal for a lot of MSPs now to have what was traditionally an on-premises server sitting in each one of these small SMB environments now sitting in a colo facility, or in some cases, of course, sitting in Amazon. So what that looks like today is that we have the capability to deploy an agent to those devices sitting within that AWS instance, and that's calling home, per se. So that's calling back to, perhaps, the MSP solution that's also in AWS, or even calling back to that solution that's sitting in the facility, in the office of that managed service provider. And it's reporting back details like information on the server, utilization on the server, for example. And of course, it includes capabilities that even allow you to take corrective actions if you need to. So, trigger events and correct itself, in some scenarios.
Awesome. Now, I know we're not treating AWS any differently than our main office in these particular examples, but what if we have a second cloud provider? Does that actually change the solution at all in architecture, deployment method, or anything along those lines?
From an Orion standpoint, it really doesn't. We're still looking at being able to collect the workloads, the information about the cloud VMs, cloud instances, depending upon which cloud provider you're talking about, and that allows you to grab the information from the operating system, from the application. It could be AWS, it could be Azure, it could be Alphabet, Google, or name any other of the third-party cloud providers out there that many customers may be using.
Maybe an MSP that's hosting a small cloud?
So, that gives you kind of a sense of a still, "I can have my poller in the main office, I can have mixed workloads, mixed cloud environments, where I need to grab information coming from my servers, from one side or the other, but I need to see that all together, right?" Everything that I'm collecting from my main data center, from the cloud itself, I need to be able to see all of that information there. I mean, that's the world that a lot of our customers are living in today, right? That hybrid IT world. It's important to be able to give them the visibility into that.
Awesome. Shawn, anything different from your point of view? Or do you just— just another location, just throw an agent out there again?
Exactly, it's just another location. The comment there about a mixed environment, I think, is super interesting, especially for an MSP. As an MSP, as I take over a customer, sometimes that server is sitting in AWS, and maybe that's my preferred place to put my environment, but that's not always true. You walk into environments where sometimes it is Azure, sometimes it is a third party. And by deploying an agent to any one of those devices, it calls home regardless of really where it is. So I can take over those environments, I can create that continuity between all my customers, and create that single pane of glass that we talk about in MSP, regardless of where those devices exist.
Awesome. No, because as more workloads are going to the cloud, hybrid IT is now the new norm. We at SolarWinds, we're not treating it any differently; we're still able to grab the data and put it into the same stuff you're using right now. We've got solutions for multiple clouds, or just AWS. We're able to pull that all in and report on it as if it was just in your current data center right now.
Yeah, our customers don't really care-- I mean, they ultimately do care where their workloads are running, but what they really care about is being able to get information about that data, regardless of where it's running. So, it's important for them to have a solution that allows that flexibility. So, especially those that are transitioning from maybe a traditional all data center environment, to a hybrid or to all cloud, they need something that's going to be able to transition with their business as well.
Awesome. No, I think that does a good job of explaining where we're going to be on hybrid IT, and we'll move on to the next scenario. In this one, we've got a main office, and we've got a couple of remote offices, but they're not connected by any type of dedicated link, just the internet. I don't know if you guys noticed the theme where I'm going kind of big connection, big office, and then bigger or smaller office, smaller, no connection.
Kind of noticed, yeah.
Oh, okay, okay. So since you noticed, you win the prize, and I'll let you go first, because this seems a little more MSP-ish. How would you go about monitoring it or deployment methodology?
Yeah, for sure. This, to me, is the bread and butter of the MSP solution. A handful of small, remote offices, and of course, a main office in this scenario. In the MSP world, what it looks like, as an MSP, is that I may have a customer that has a main office and a bunch of remotes, but keep in mind, I may have 100, 200 customers of all different shapes and sizes; that's really the core competency of the MSP solution. So what that looks like is that within each one of these remote offices, I have a probe deployed. And what that probe is doing is scanning that environment, finding anything that has an IP address, deploying agents to those Windows devices, discovering asset information, and setting up the monitoring. The other thing that probe is doing for me is scanning and discovering that network gear, and monitoring the network gear for me automatically. So when it comes across— I'm assuming there's a firewall, of course, and a router and a managed switch at each one of these locations.
Absolutely.

It's going to go ahead and discover those devices, classify them, and set up the details I would want as an IT professional. Things like traffic monitoring, and latency to those devices, and of course, simple things like uptime. That probe is going to stick around in that environment, with those agents deployed, of course, to those devices, and it's going to provide a mechanism and a place to start doing things like patch management. So, it can actually work as a cache, and download the patches that are necessary for each one of those locations, and serve them to each one of those agents or devices in those scenarios. What's really neat about that is that it's only going to download the patches that are needed at that particular site, and it's doing that based on the fact that the agent can sit there and discover, "Well, is a patch missing?" And give the technician, or give the IT service provider, the power to approve, decline, remove, what have you. And that's for both Windows and third-party patching in this environment.
So in the scenario you're describing, are you saying that these are all different customer sites, and that this is the actual MSP? Or are you saying that this is one big organization?
Yeah, so in my scenario, I have the MSP as the main office, as you have drawn here, and each one of my customers is a remote office. But this very much holds true if this was one large organization, as well. Whether the MSP is the main office or not, we can put that probe at each one of these locations and phone home. There's of course a concept within the MSP solution of a main office and site locations, as they collapse into a main office, for example.
Now, Steve, you've been quiet, I know it's hard to do. Can we tackle a scenario like this using the Orion products?
Absolutely. So where Shawn was talking about the probes being deployed in the remote offices, we can do a lot of the same thing. If you have an on-premises solution that you want to maintain, you'd have Orion deployed, say, in the main office, and then deploy agents out to each of the remote offices, just because I only have a small set of devices that I want to collect from. There's a little bit of a trick to that, just depending upon the architecture, right? We don't necessarily have a direct connection between the two; we don't have point-to-point, or anything like that; we're purely going over the internet. One thing that you'll want to do is have a poller exposed in a DMZ, allowing an agent to phone home back to an Orion poller and send the data to it. Then all you have to do is expose one port out there for those agents to communicate with, and at that point, you have easy communication between your main Orion server and that DMZ-based poller. That allows all of your remote offices to send all the data necessary. Now, if you have a larger remote office, with more devices that you want to collect from, you may think about putting remote pollers out at each office, collecting that information, and sending it back to the main site as well.
So you could bypass the DMZ if you had the pollers, or you would still need the DMZ to...
Most of that's really going to depend upon your business requirements. Security organizations may say, "Hey, we have to have a certain level of security for that information traveling across," so they may require something in the DMZ to collect all that information. So it really comes down to your organization and what the requirements are, but we have an architecture that supports either scenario.
Okay, because this scenario actually came from an email thread I was reading, where it was an organization from Brazil. They had a bunch of remote gas stations, and they just had basic DSL. But they wanted to look at the little security cameras, and printers, and whatever else was at each location, so they actually put the agent on the POS machines, and then used that to collect the information from the security camera, the printer, and all that, and then basically piped that information back home.
Yeah, and that's really the benefit and the flexibility of the agent-based methodology. Depending upon what devices you have out there, we have the ability to allow you to deploy that agent on one of those devices, that point-of-sale device, as opposed to having to have a dedicated box there for collection of information. You can just put it on that point-of-sale device, have the agent running, collect the information, and then, again, send it back to the main office.

So Steve, I mentioned, then you mentioned, putting an agent on the point-of-sale device. Now, is there another way? Could we maybe put a SAM agent on something else and use it as a little probe out there?
Yeah, so one of the things that we have available: we can actually deploy the agent to a Linux ARM device. So, with the Orion agent for Linux, there is the ability to install that agent on ARM-based devices, and then you don't have to worry about necessarily trying to find a Windows device or another x86 device. You can actually have that deployed on there, and it has a really small footprint. It allows you to use a very small and, in some instances, easily disposable device; if it breaks, or something happens to it, it's not that big a deal.
Yeah, they're like 50 bucks.
Right, so then you can start the collection of the information with that simple device, and it's very easy and cost-effective to deploy that.
Yeah, no, I think that would be, like you said, really easy. Just throw it in the corner, it'll sit there, gather the data, and then shoot it back home for you to manage and monitor.
I know we've talked about MSP just prior as a complete solution, but do you have anything more to add to this diagram on how you would monitor it, or options?
Yeah, for sure. When you start talking about all these different remote locations, as an IT service provider that has to cover the whole gamut, I have to start providing not just patch management, but also the security solutions in these environments, too. So what that looks like in the MSP solution is that for each one of these Windows devices, for example, I have the capability to deploy an antivirus solution, and manage that from that single pane of glass we were talking about earlier. So that means that, as a technician, or as a service provider, from that one console across my, say, 250 customers, I am doing that patch approval, I'm doing that antivirus: exclusions, configuration, remediation, of course, and doing that in bulk, or through policies, for each one of my end devices. And being as it's a policy-driven solution, meaning that I can create these core profiles, these core settings, and reuse them over and over again, I can create a scenario where, for those more advanced users, those users that need more flexibility, I'm creating an AV solution that, perhaps, is a little bit more forgiving, a little bit lighter. It allows them to explore the depths of the internet a little bit more. But of course, for those users that create more tickets, and struggle more with their computers, I have a more locked-down solution. And I'm able to change that on the fly, should I need to, or disable it when I need to do some troubleshooting, for example.
All right, so on to our final scenario, which I think is right up your alley. We've got a small-to-medium business or startup. I keep picturing Silicon Valley, working from the kitchen table. No actual server infrastructure, just a lot of guys or gals on laptops, and then SaaS-based email, CRM, documentation, whatever. So, no infrastructure outside of laptops. How would you handle this?
Yeah, for sure. So, in this type of environment, as an MSP, how I'd handle something like this is, as you mentioned, there's no server in this environment. So I'm putting agents on each one of these devices. Now, these devices could be Windows, of course, which means I'm getting my patch management, we're doing AV, you know, those type of details. But in my mind, when you said Silicon Valley, I went MacBooks.
Everyone has a MacBook. So in that scenario, of course, we're deploying Mac agents to these devices. So with these Mac agents, I'm getting the capability to do scripting on them. As an MSP, my everyday goal is to drive efficiency and save time. So with these Mac devices, I can schedule recurring scripting, I can do my maintenance automatically. Then, with these devices as well— well, I know Windows devices get all the hype when it comes to patch management, but I can of course script the patch management for my Mac devices too. So I can do that shell connection, I can run that script on those devices, and make sure they're up to date. Now, with these devices, when we start talking about the small startups, I'm going to assume they're roaming around as well. They're going to your local coffee shop.
I stayed at a Best Western, yeah, one of those guys?
Exactly. They have unique support requests. So, because this device is roaming around the world, I need to make sure that when they are sitting at a Best Western and they need help from that IT service provider, I can get to them on demand. So with that being said, these agents come bundled with remote control software. As long as that device is online, whether it's a Mac or Windows, it really doesn't matter, it's calling home. It's reporting status back to my MSP solution, and I can, with the click of a button, get access to that device, get onto the desktop, troubleshoot with that user, and do the things I need to do with remote control software, like file transfers and stuff like that. That's all available through that device. So I saw this scenario as me being an MSP, providing services to these devices. There's a secondary component that comes along with that: the needs of an MSP, which are, "How do I show value?" So these devices are reporting back asset information, and the statuses of my monitoring are all rolling up into reports where, traditionally, the consumer of that report is the business owner, or, in my startup scenario, the young business owner. But that's my perspective if I'm providing services as an MSP. In this scenario, I also envision, of course, since it is a startup, a small organization, maybe there is a business owner or another technical user in that environment that is providing support to the other devices. But it doesn't really matter if they are an MSP in that scenario; they can still log into the same solution, see the same dashboard, and provide the same style of support to each one of those devices. And like I said, regardless of whether it's a Mac or a Windows device, or what have you.
Now, I know you're not able to support the software side, the CRM and all that, but at least using your solution, you're able to say, "Hey, you're not even online; that's why email's not working." You could at least sort of troubleshoot the internet a little bit, based on whether they're reporting in or not. Because I've had customers, I came from an MSP background, that were at a café and didn't have internet, and then they're sitting there calling in, "Why is email not working?"
With no further details, just "Why is email not working?"
It's not email. In this case he was on-premises, but: you don't have internet, therefore email's not working. So I'd assume you'd use a process of elimination to help a little on the SaaS side, by saying, "Hey, you've got high latency," or "It is slow," and your experience of those services is going to be good or bad accordingly. Now, we don't typically think of this as an Orion solution, but we do have plenty of people who start with an MSP, "Hey, I've got a couple of users," and then gradually get bigger and bigger, and still don't have any actual infrastructure because they embrace the cloud. Are there ways to transition from MSP to the core/Orion-type platform?
So from my perspective, it comes down to economies of scale, right? A lot of customers look for an MSP solution because they typically don't have the economies of scale to own and manage their own solution for understanding what's going on in their environment. But ideally they're successful, the business has grown, and they've started to go beyond those economies of scale. Maybe they're starting to hire a few IT folks. In that scenario, you still have those devices out there, sitting at the kitchen table or at the local coffee shop, needing access to applications. So if they don't have a data center or an infrastructure that supports the on-premises model, but they want to understand the state of the SaaS applications they're using and what the typical user interaction looks like, they could actually deploy an Orion instance in the cloud. We were talking about that aspect earlier. They can look at what's happening with the typical web transactions, right? The user logging into their SaaS application, trying to hit different pages within it. They could record that transaction using something like Web Performance Monitor, and then play back that synthetic transaction. From there they can ask, "How is this SaaS application functioning during that typical user interaction? Is it problematic?" We may not see the path directly from user A to that SaaS application, but we can at least understand user A's interaction with the application from a synthetic standpoint. And if you want to go a little more in depth and monitor those services as a whole...
I want to look at the authentication and service aspects of those SaaS solutions; I can also deploy something like Server & Application Monitor. Office 365 is a perfect example, right? I can script against it and understand: What are the O365 services? What's the health of those services? What's the health of my account associated with those services? Between the synthetic transactions and the service health and availability checks, I can see all of that in one place. And we don't have to have an instance of Orion running back at headquarters, because there is no main office site, right? With the flexibility of deploying in a public cloud provider, I can have it there, reach out to these different SaaS solutions, get a perspective on them, and ensure my users' experience should be ideal. And then it comes back to what's happening on the end-user device, right? I know everything on the SaaS application side is functioning the way I expect it to; Orion in the cloud is essentially that remote user for you, so it gives you that sense. But there's still the aspect of needing to understand what's happening on the end-user device, so there's that...
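One way to picture the synthetic-transaction idea described above is a tiny timing harness: run one scripted step, such as fetching a SaaS login page, and flag it if it fails or exceeds a latency threshold. This is a hedged sketch of the concept, not how Web Performance Monitor actually works internally; the URL and threshold below are illustrative assumptions.

```python
import time
from urllib.request import urlopen

def time_transaction(step, threshold_s: float = 2.0) -> dict:
    """Run one synthetic step and report its latency and pass/fail status."""
    start = time.monotonic()
    ok = bool(step())
    elapsed = time.monotonic() - start
    return {"ok": ok, "latency_s": elapsed, "slow": elapsed > threshold_s}

def fetch_login_page(url: str = "https://outlook.office365.com") -> bool:
    """Hypothetical step: request the SaaS login page, succeed on HTTP 200."""
    with urlopen(url, timeout=10) as resp:
        return resp.status == 200

# A cloud-hosted poller would schedule something like
# time_transaction(fetch_login_page) every few minutes and alert
# when results come back failing or slow.
```

The point of the cloud placement is that the poller sits outside any office network, so it measures roughly what a remote user on the public internet would experience.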
So the MSP could still come into play, though? They're monitoring the desktop, saying it's online, you've rebooted recently, you don't have 500 things open, this person should be working. And from the cloud, your hosted instance shows that everything looks like it should be good to go.
Right. And again, if they've grown and they have the economies of scale, they might want to look at something like Dameware. They may transition off that MSP-hosted solution, where someone is taking care of it for them, and instead have their own solution available to support their end-users as needed. It all comes back to economies of scale: What is the size of the organization? Does it make cost-effective sense to have someone manage it for me through an MSP-based solution? Or does it make sense to own my own IT and ensure delivery to my end-users?
Awesome work, guys. Thanks for being in the hot seat and coming up with some unique answers to these various scenarios.
It really was pretty easy.
Yeah, it wasn't that hard.
I see how it is. As you just heard and saw, monitoring organizations that are large or small,
Or in one location or many.
Or the cloud.
Yes, multiple locations, or multiple clouds, can provide unique monitoring environments. But with a little bit of thinking, each can successfully be monitored. For THWACKcamp, I'm Jared Hensle.
I'm Steven Hunt.
And I'm Shawn Zenz, and thanks for watching.