Monitoring Central


We’re no strangers to logging from Docker containers here at SolarWinds® Loggly®. In the past, we’ve demonstrated different techniques for logging individual Docker containers. But while logging a handful of containers is easy, what happens when you start deploying dozens, hundreds, or thousands of containers across different machines?

In this post, we’ll explore the best practices for logging applications deployed using Docker Swarm.

Intro to Docker Swarm

Docker Swarm is a container orchestration and clustering tool from the creators of Docker. It allows you to deploy container-based applications across a number of computers running Docker. Swarm uses the same command-line interface (CLI) as Docker, making it more accessible to users already familiar with Docker. And as the second most popular orchestration tool behind Kubernetes, Swarm has a rich ecosystem of third-party tools and integrations.

A swarm consists of manager nodes and worker nodes. Managers control how containers are deployed, and workers run the containers. In Swarm, you don’t interact directly with containers; instead, you define services that describe what the final deployment should look like. Swarm handles deploying, connecting, and maintaining those containers until they match the service definition.

For example, imagine you want to deploy an Nginx web server. Normally, you would start an Nginx container on port 80 like so:

$ docker run --name nginx --detach --publish 80:80 nginx

With Swarm, you instead create a service that defines what image to use, how many replica containers to create, and how those containers should interact with both the host and each other. For example, let’s deploy an Nginx image with three containers (for load balancing) and expose it over port 80.

$ docker service create --name nginx --detach --publish 80:80 --replicas 3 nginx

When the deployment is done, you can access Nginx using the IP address of any node in the Swarm.


To learn more about Docker services, see the services documentation.

 

The Challenges of Monitoring and Debugging Docker Swarm

Besides the existing challenges in container logging, Swarm adds another layer of complexity: an orchestration layer. Orchestration simplifies deployments by taking care of implementation details such as where and how containers are created. But if you need to troubleshoot an issue with your application, how do you know where to look? Without comprehensive logs, pinpointing the exact container or service where an error occurred can become an operational nightmare.

On the container side, nothing much changes from a standard Docker environment. Your containers still send logs to stdout and stderr, which the host Docker daemon accesses using its logging driver. But now your container logs include additional information, such as the service that the container belongs to, a unique container ID, and other attributes auto-generated by Swarm.

Consider the Nginx example. Imagine one of the containers stops due to a configuration issue. Without a monitoring or logging solution in place, the only way to know this happened is by connecting to a manager node using the Docker CLI and querying the status of the service. And while Swarm automatically groups log messages by service through the docker service logs command, finding a specific container’s messages can still be time-consuming, because you have to be logged in to a manager node and scan the combined output of every replica.
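That status query looks something like this, run from a manager node:

$ docker service ps nginx

The output lists each task (container) in the service along with the node it runs on, its desired state, and its current state, which is how you would spot the stopped replica without a centralized logging solution.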

 

How Docker Swarm Handles Logs

Like a normal Docker deployment, Swarm has two primary log destinations: the daemon log (events generated by the Docker service), and container logs (events generated by containers). Swarm doesn’t maintain separate logs, but appends its own data to existing logs (such as service names and replica numbers).

The difference is in how you access logs. Instead of showing logs on a per-container basis using docker logs <container name>, Swarm shows logs on a per-service basis using docker service logs <service name>. This aggregates and presents log data from all of the containers running in a single service. Swarm differentiates containers by adding an auto-generated container ID and instance ID to each entry.

For example, the following message was generated by the second container of the nginx_nginx service, running on swarm-client1.

# docker service logs nginx_nginx
nginx_nginx.2.subwnbm15l3f@swarm-client1 | 10.255.0.2 - - [01/Jun/2018:22:21:11 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0" "-"

To learn more about the logs command, see the Docker documentation.

 

Options for Logging in Swarm

Since Swarm uses Docker’s existing logging infrastructure, most of the standard Docker logging techniques still apply. However, to centralize your logs, each node in the swarm will need to be configured to forward both daemon and container logs to the destination. You can use a variety of methods such as Logspout, the daemon logging driver, or a dedicated logger attached to each container.
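For example, one way to do this (shown here as a sketch with a placeholder syslog address, not a Loggly-specific configuration) is to set the default logging driver in /etc/docker/daemon.json on every node, so all containers on that node forward their logs to the same endpoint:

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://logs.example.com:514"
  }
}

You would then restart the Docker daemon on each node for the change to take effect.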

 

Best Practices to Improve Logging

To log your swarm services effectively, there are a few steps you should take.

 

1. Log to STDOUT and STDERR in Your Apps

Docker automatically forwards all standard output from containers to the built-in logging driver. To take advantage of this, applications running in your Docker containers should write all log events to STDOUT and STDERR. If your applications write logs to files inside their containers instead, you risk losing that data when the containers stop or are rescheduled.

 

2. Log to Syslog or JSON

Syslog and JSON are two of the most commonly supported logging formats, and Docker is no exception. Docker stores container logs as JSON files by default, but it includes a built-in driver for logging to Syslog endpoints. Both JSON and Syslog messages are easy to parse, contain critical information about each container, and are supported by most logging services. Many container-based loggers such as Logspout support both JSON and Syslog, and Loggly has complete support for parsing and indexing both formats.
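For instance, an individual service can be pointed at a syslog endpoint using Docker’s built-in syslog driver; the address below is a placeholder for your own syslog server:

$ docker service create --name nginx --publish 80:80 --replicas 3 \
    --log-driver syslog --log-opt syslog-address=udp://logs.example.com:514 nginx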

 

3. Log to a Centralized Location

A major challenge in cluster logging is tracking down log files. Services could be running on any one of several different nodes, and having to manually access log files on each node can become unsustainable over time. Centralizing logs lets you access and manage your logs from a single location, reducing the amount of time and effort needed to troubleshoot problems.

One common solution for container logs is dedicated logging containers. As the name implies, dedicated logging containers are created specifically to gather and forward log messages to a destination such as a syslog server. Dedicated containers automatically collect messages from other containers running on the node, making setup as simple as running the container.

 

Why Loggly Works for Docker Swarm

Normally you would access your logs by connecting to a manager node, running docker service logs <service name>, and scrolling down to find the logs you’re looking for. Not only is this labor-intensive, it’s also slow because you can’t easily search, and it’s difficult to automate alerts or create graphs from the results. The more time you spend searching for logs, the longer problems go unresolved. It also means creating and maintaining your own log centralization infrastructure, which can become a significant project on its own.

Loggly is a log aggregation, centralization, and parsing service. It provides a central location for you to send and store logs from the nodes and containers in your swarm. Loggly automatically parses and indexes messages so you can search, filter, and chart logs in real-time. Regardless of how big your swarm is, your logs will be handled by Loggly.

 

Sending Swarm Logs to Loggly

The easiest way to send your container logs to Loggly is with Logspout. Logspout is a container that automatically routes all log output from the other containers running on the same node. When you deploy it as a service in global mode, Swarm automatically creates a Logspout container on each node in the swarm.

 

To route your logs to Loggly, provide your Loggly Customer Token and a custom tag, then specify a Loggly endpoint as the logging destination.

# docker service create --name logspout --mode global --detach \
    --volume=/var/run/docker.sock:/var/run/docker.sock \
    --volume=/etc/hostname:/etc/host_hostname:ro \
    -e SYSLOG_STRUCTURED_DATA="<Loggly Customer Token>@41058 tag=\"<custom tag>\"" \
    gliderlabs/logspout syslog+tcp://logs-01.loggly.com:514

You can also define a Logspout service using Compose.

# docker-compose-logspout.yml
version: "3"

networks:
  logging:

services:
  logspout:
    image: gliderlabs/logspout
    networks:
      - logging
    volumes:
      - /etc/hostname:/etc/host_hostname:ro
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      SYSLOG_STRUCTURED_DATA: "<Loggly Customer Token>@41058"
      tag: "<custom tag>"
    command: syslog+tcp://logs-01.loggly.com:514
    deploy:
      mode: global

Use docker stack deploy to deploy the Compose file to your swarm. <stack name> is the name that you want to give to the deployment.

# docker stack deploy --compose-file docker-compose-logspout.yml <stack name>

As soon as the deployment is complete, messages generated by your containers start appearing in Loggly.

Configuring Dashboards and Alerts

Since Swarm automatically appends information about the host, service, and replica to each log message, we can create Dashboards and Alerts similar to those for a single-node Docker deployment. For example, Loggly automatically breaks down logs from the Nginx service into individual fields.


We can create Dashboards that show, for example, the number of errors generated on each node, as well as the container activity level on each node.


Alerts are useful for detecting changes in the status of a service. If you want to detect a sudden increase in errors, you can easily create a search that scans messages from a specific service for error-level logs.


You can select this search from the Alerts screen and specify a threshold. For example, this alert triggers if the Nginx service logs more than 10 errors over a 5-minute period.


Conclusion

While Swarm adds a layer of complexity on top of a typical Docker installation, logging doesn’t have to be difficult. Tools like Logspout and Docker logging drivers make it easy to collect and manage container logs no matter where those containers are running. And with Loggly, you can easily deploy a complete, cluster-wide logging solution across your entire environment.

Benchmarking PHP Logging Frameworks

How do PHP logging frameworks fare when pushed to their limits? This analysis can help us decide which option is best for our PHP applications. Performance, speed, and reliability matter for logging frameworks because we want the best performance out of our applications and minimal data loss.

Our goals for these benchmark tests are to measure how long different frameworks take to process a large number of log messages with various logging handlers, and to determine which frameworks are the most reliable at their limits (dropping few or no messages).

The frameworks we tried are:

  • native PHP logging (error_log and syslog built-in functions)
  • KLogger
  • Apache Log4php
  • Monolog

All of these frameworks use synchronous or “blocking” calls, as PHP functions typically do: the web server execution waits until the function/method call finishes before continuing. As for the handlers: error_log, KLogger, Log4php, and Monolog can write log messages to a text file, while error_log/syslog, Log4php, and Monolog can send messages to the local system logger. Finally, only Log4php and Monolog allow remote syslog connections.

NOTE: The term syslog can refer to various things. In this article, it includes the PHP function of the same name, the system logger daemon (e.g., syslogd), or a remote syslog server (e.g., rsyslog).

Application and Handlers

For this framework benchmark, we built a PHP CodeIgniter 3 web app with a controller for each logging mechanism. Controller methods echo the microtime difference before and after logging, which is useful for manual tests. Each controller method call has a loop that writes 10,000 INFO log messages in the case of file handlers (except error_log which can only produce E_ERROR), or 100,000 INFO messages to syslog. This helps us stress the logging system while not over-burdening the web server request handler.
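As a rough sketch (simplified here; the actual code is in the repository linked in the note below), the error_log controller method boils down to timing a tight loop of log calls:

<?php
// Simplified sketch of one benchmark controller method (CodeIgniter 3 style).
class Native extends CI_Controller
{
    public function error_log()
    {
        $start = microtime(true);           // timestamp before logging
        for ($i = 0; $i < 10000; $i++) {    // 10,000 messages for the file-handler tests
            error_log("benchmark log message $i");
        }
        echo microtime(true) - $start;      // elapsed time, useful for manual tests
    }
}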

NOTE: You may see the full app source code at https://github.com/jorgeorpinel/php-logging-benchmark

 

For the local handlers, first we tested writing to local files and kept track of the number of written logs in each test. We then tested the local system logger handler (which uses the /dev/log UNIX socket by default) and counted the number of logs syslogd wrote to /var/log/syslog.

As for the “remote” syslog server, we set up rsyslog on the system and configured it to accept both TCP and UDP logs, writing them to /var/log/messages. We recorded the number of logs there to determine whether any of them were dropped.

Fig. 1. System architecture – each arrow represents a benchmark test.

Methodology

We ran the application locally on Ubuntu with Apache (and mod-php). First, each Controller/method was “warmed up” by requesting that URL with curl, which ensures the PHP source is already precompiled when we run the actual framework benchmark tests. Then we used ApacheBench to stress test the local web app with 100 or 10 serial requests (file or syslog, respectively). For example:

ab -ln 100 localhost:8080/Native/error_log

ab -ln 10 localhost:8080/Monolog/syslog_udp

The total number of log calls in each test was 1,000,000 (each method). We gathered performance statistics from the tool’s report for each Controller/method (refer to figure 1).

Please note that in normal operation, actual drop rates should be much smaller, if there are any at all.

Hardware and OS

We ran both the sample app and the tests on an AWS EC2 micro instance: a 64-bit Ubuntu 16.04 Linux box with an Intel(R) Xeon(R) CPU @ 2.40GHz, 1 GiB of memory, and an 8 GB SSD.

Native tests

The “native” controller uses a couple of PHP built-in error handling functions. It has two methods: one that calls error_log, which is configured in php.ini to write to a file, and one that calls syslog to reach the system logger. Both functions are used with their default parameters.

error_log to file

By definition, no log messages can be lost by this method as long as the web server doesn’t fail. Its performance when writing to file will depend on the underlying file system and storage speed. Our test results:

error_log (native PHP file logger)
Requests per sec: 23.55 [#/sec] (mean)
Time per request: 42.459 [ms] (mean)
(Divide the time per request by the 10,000 logs written per request.)
NOTE: error_log can also be used to send messages to the system log, among other message types.

syslog

Using error_log when error_log = syslog in php.ini, or simply using the syslog function, we can reach the system logger. This is similar to using the logger command in Linux.
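In code, reaching the system logger with PHP’s built-in functions looks roughly like this (a minimal sketch; the identifier string passed to openlog is arbitrary):

<?php
// Minimal sketch: send one INFO-level message to the local system logger.
openlog('php-logging-benchmark', LOG_PID, LOG_USER);  // prefix, options, facility
syslog(LOG_INFO, 'benchmark log message');
closelog();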

syslog (native PHP system logger)
Requests per sec: 0.25 [#/sec] (mean)
Time per request: 4032.164 [ms] (mean)
(Divide the time per request by the 100,000 logs sent per request.)

This is typically the fastest logger, and syslogd is at least as robust as the web server, so no messages should be dropped (none were in our tests). Another advantage of the system logger is that it can be configured both to write to a file and to forward logs over the network.

KLogger test

KLogger is a “simple logging class for PHP” that saw its first stable release in 2014. It can only write logs to a file, but its simplicity helps its performance. KLogger is PSR-3 compliant: it implements the LoggerInterface.
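Basic usage is only a couple of lines; something like the following sketch (based on the library’s PSR-3 interface, with a placeholder log directory):

<?php
require 'vendor/autoload.php';

// KLogger writes log files into the directory you give it.
$logger = new Katzgrau\KLogger\Logger(__DIR__ . '/logs');
$logger->info('benchmark log message');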

KLogger (simple PHP logging class)
Requests per sec: 14.11 [#/sec] (mean)
Time per request: 70.848 [ms] (mean), or 70.848 ÷ 10,000 = 0.0070848 ms per message
NOTE: This GitHub fork of KLogger allows local syslog usage as well. We did not try it.

Log4php tests

Log4php, first released in 2010, is part of the suite of loggers Apache provides for several popular programming languages. When logging to a file, it turns out to be a speedy contender, at least when the application runs on Apache, which probably helps its performance. In local tests using PHP’s built-in server (the php -S command), it was actually the slowest contender!
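Configuration typically lives in an XML (or PHP array) file, and loggers are then fetched by name. A minimal sketch (the configuration file name here is a placeholder):

<?php
require 'vendor/autoload.php';

// Load the appender/layout configuration, then log through a named logger.
Logger::configure('log4php-config.xml');
$log = Logger::getLogger('benchmark');
$log->info('benchmark log message');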

Log4php (Apache PHP file logger)
Requests per sec: 18.70 [#/sec] (mean) × 10,000 ≈ 187,000 messages per second
Time per request: 53.470 [ms] (mean) ÷ 10,000 ≈ 0.0053 ms per message

As for sending to syslog, it was actually our least performant option, though not by much:

Log4php to syslog
  • Local syslog socket: 0.08 ms per log, 0% dropped
  • Syslog over TCP/IP: around 24 ms per log, 0% dropped
  • Syslog over UDP/IP: 0.07 ms per log, 0.15% dropped

Some of the advantages Log4php has, which may offset its lack of performance, are Java-like XML configuration files (same as other Apache loggers, such as the popular log4j), six logging destinations, and three message formats.

NOTE: Remote syslog over TCP, however, doesn’t seem to be well supported at this time. We had to use the general-purpose LoggerAppenderSocket, which was really slow, so we ran only 100,000 log calls for that test.

Monolog tests

Monolog, like KLogger, is PSR-3 compliant; and, like Log4php, it’s a full logging framework that can send logs to files, sockets, email, databases, and various web services. It was first released in 2011.

Monolog features many integrations with popular PHP frameworks, making it a popular alternative. Monolog beat its competitor Log4php in our tests, but it is still neither the fastest nor the most reliable option, although it’s probably one of the easiest for web developers to adopt.
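For reference, the basic Monolog setup behind the file and syslog tests looks something like this (a sketch; the channel names and file path are placeholders):

<?php
require 'vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\SyslogHandler;

// One channel writing to a file, another sending to the local system logger.
$fileLog = new Logger('benchmark-file');
$fileLog->pushHandler(new StreamHandler(__DIR__ . '/logs/app.log', Logger::INFO));
$fileLog->info('benchmark log message');

$sysLog = new Logger('benchmark-syslog');
$sysLog->pushHandler(new SyslogHandler('php-logging-benchmark'));
$sysLog->info('benchmark log message');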

Monolog (full PHP logging framework)
Requests per sec: 4.93 [#/sec] (mean) × 10,000 ≈ 49,300 messages per second
Time per request: 202.742 [ms] (mean) ÷ 10,000 ≈ 0.0203 ms per message

Monolog over Syslog:

Monolog over syslog
  • UNIX socket: 0.062 ms per log, less than 0.01% dropped
  • TCP: 0.06 ms per log, 0.29% dropped
  • UDP: 0.079 ms per log, 0% dropped

Now let’s look at graphs that summarize and compare the results above. These charts show the tradeoff between faster but more limited, lower-level native logging methods and less performant but full-featured frameworks:

Local File Performance Comparison

Fig. 2. Time per message written to file [ms/msg]

Local Syslog Performance and Drop Rates

Log handler or “appender” names vary from framework to framework. For native PHP, we just use the syslog function (KLogger doesn’t support this); in Log4php, it’s a class called LoggerAppenderSyslog; and it’s called SyslogHandler in Monolog.

Fig. 3. Time per message sent to syslogd via socket [ms/msg]

Fig. 4. Drop rates to syslogd via socket [%]

 

Remote Syslog Performance and Drop Rates

The appenders are LoggerAppenderSocket in Log4php, and SocketHandler and SyslogUdpHandler in Monolog.

To measure the drop rates, we leveraged the $RepeatedMsgReduction config param of rsyslog, which collapses identical messages into a single one and a second message with the count of further repetitions. In the case of Log4php, since the default message includes a timestamp that varies in every single log, we forwarded the logs to SolarWinds® Loggly® (syslog setup in seconds) and used a filtered, interactive log monitoring dashboard to count the total logs received.

TCP

Fig. 5. Time per message sent via TCP to rsyslog

Fig. 6. Drop rates to rsyslog (TCP) [%]

UDP

Fig. 7. Time per message sent via UDP to rsyslog

Fig. 8. Drop rates to rsyslog (UDP)

Conclusion

Each logging framework is different, and while each could be the best fit for specific projects, our recommendations are as follows. Nothing beats the performance of native syslog for system admins who know their way around syslogd or syslog-ng daemons, or who want to forward logs to a cloud service such as Loggly. If what’s needed is a simple yet powerful way to log locally to files, KLogger offers PSR-3 compliance and is almost as fast as native error_log, although Log4php does seem to edge it out when the app is running on Apache. For a more complete framework, Monolog seems to be the most well-rounded option, particularly for remote logging via TCP/IP.

 

After deciding on a logging framework, your next big decision is choosing a log management solution. Loggly provides unified log analysis and monitoring for all your servers in a single place. You can configure your PHP servers to forward syslog to Loggly or simply use Monolog’s LogglyHandler, which is easy to set up in your app’s code. Try Loggly for free and take control of your PHP application logs.
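For reference, wiring up the LogglyHandler takes only a few lines; roughly like this (a sketch; the token is a placeholder for your Loggly customer token):

<?php
require 'vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\LogglyHandler;

// Send WARNING-and-above events from this channel directly to Loggly.
$log = new Logger('my-app');
$log->pushHandler(new LogglyHandler('<Loggly Customer Token>', Logger::WARNING));
$log->warning('Something needs attention');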

Proactive Monitoring for SaaS

Page load time is inversely related to page views and conversion rates. That’s probably not a controversial statement, as the causality is intuitive, and there is empirical data from industry leaders such as Amazon, Google, and Bing to back it up (reported in High Scalability and O’Reilly’s Radar, for example).

 

As web technology has become much more complex over the last decade, the issue of performance has remained a challenge as it relates to user experience. Fast forward to 2018, and UX is identified as a key requirement for business success by CIOs and CDOs.

 

In today’s growing ecosystem of competing web services, the undeniable reality remains that performance impacts business and it can represent a major competitive (dis)advantage. Whether your application relies on AWS, Azure, Heroku, Salesforce, Cloud Foundry, or any other SaaS platform, consider these five tips for monitoring SaaS services.

 

1. Realize the Importance of Monitoring

In case we haven’t established that app performance is critical for business success, let’s look at research done in the online retail sector.

 

“E-commerce sites must adopt a zero-tolerance policy for any performance issues that will impact customer experience [in order to remain competitive]” according to Retail Systems Research. Their conclusion is that performance management must shift from being considered an IT issue to being a business matter.

 

We can take this concept into more specific terms, as stated in our article series on Building a SaaS Service for an Unknown Scale. “Treat scalability and reliability as product features; this is the only way we can build a world-class SaaS application for unknown scale.”

Data from Measuring the Business Impact of IT Through Application Performance (2015).

 

End users have come to expect very fast, real-time-like interaction with most software, regardless of the system complexities behind the scenes. This means that commercial applications and SaaS services need to be built and integrated with performance in mind at all times. And so, knowing how to measure their performance from day one is paramount. Logs extend application performance monitoring (APM) by giving you deeper insights into the causes of performance problems as well as application errors that can cause user experience problems.

 

2. Incorporate a Monitoring Strategy Early On

In today’s world, planning for your SaaS service’s successful adoption to take time (and thus worrying about its performance and UX later) is like selling 100 tickets to a party but only beginning preparations on the day of the event. Needless to say, such a plan is prone to produce disappointed customers, and it can even destroy a brand. Fortunately, with SaaS monitoring solutions like SolarWinds® Loggly®, it’s not time-consuming or expensive to implement monitoring.

 

In fact, letting scalability become a bottleneck is the first of the Six Critical SaaS Engineering Mistakes to Avoid we published some time ago. We recommend defining realistic adoption goals and scenarios in the early project stages, and mapping them into performance, stress, and capacity testing. To run these tests, you’ll need to be able to monitor specific app traffic, errors, user engagement, and other metrics that tech and business teams need to define together.

 

A good place to start is with the Four Golden Signals described by Google’s Monitoring Distributed Systems book chapter: Latency, Traffic, Errors, and Saturation. Finally, and most importantly from the business perspective, your key metrics can be used as service level indicators (SLI), which are measures of the service level provided to customers.

 

Based on your SLIs and adoption goals, you’ll be able to establish service level objectives (SLOs) so your ops team can target specific availability levels (uptime and performance). And, as a SaaS provider, you should plan to offer service level agreements (SLAs). SLAs are contracts with your clients specifying what happens if you fail to meet non-functional requirements; their terms are based on your SLOs but can, of course, be negotiated with each client. SLIs, SLOs, and SLAs are the basis for successful site reliability engineering (SRE).

Apache Preconfigured Dashboards in Loggly can help you watch SLOs in a single click.

 

For a seamless understanding between tech and business leadership, key performance indicators (KPIs) should be identified for the various business stakeholders. KPIs should then be mapped to the performance metrics that compose each SLA (so they can be monitored). Defining a matrix of KPIs vs. metrics vs. areas of business impact as part of the business documentation is a good option. For example, web conversion rate could map to page load time and number of outages, and it impacts sales.

 

Finally, don’t forget to consider and plan for governance: roles and responsibilities around information (e.g., ownership, prioritization, and escalation rules). The RACI model can help you establish a clear matrix of which team is responsible, accountable, consulted, and informed when there are unplanned events emanating from or affecting business technology.

 

3. Have Application Logging as a Code Standard

Tech leadership should realize that the main function of logging begins after the initial development is complete. Good logging serves multiple purposes:

  1. Improving debugging during development iterations
  2. Providing visibility for tuning and optimizing complex processes
  3. Understanding and addressing failures of production systems
  4. Business intelligence

“The best SaaS companies are engineered to be data-driven, and there’s no better place to start than leveraging data in your logs.” (From the last of our SaaS Engineering Mistakes)

 

Best practices for logging have been widely written about. For example, see our article on best practices for creating logs. Here are a few guidelines from that and other sources:

  • Define logging goals and criteria to decide what to log. (Logging absolutely everything produces noise and is needlessly expensive.)
  • Log messages should contain data, context, and description. They need to be digestible (structured in a way that both humans and machines can read them).
  • Ensure that log messages are appropriate in severity using standard levels such as FATAL, ERROR, WARN, INFO, DEBUG, TRACE (See also Syslog facilities and levels).
  • Avoid side effects on code execution. In particular, use non-blocking calls so logging can’t halt your app.
  • For external systems, try to log all data that leaves your application and all data that comes in.
  • Use a standard log message format with clear key-value pairs and/or consider a known text standard format like JSON. (See figure 4 below and the short sketch after this list.)
  • Support distributed logging: Centralize logs to a shareable, searchable platform such as Loggly.
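As a quick illustration of the structured-format guideline above (a sketch; the field names are illustrative, not a required schema), a JSON log event in PHP can be as simple as:

<?php
// Sketch: emit one machine-readable log event as a single JSON line.
$event = [
    'timestamp' => gmdate('c'),                       // ISO 8601, UTC
    'level'     => 'ERROR',
    'message'   => 'Payment gateway timeout',
    'context'   => ['orderId' => 12345, 'gateway' => 'example-pay'],
];
error_log(json_encode($event));                       // goes to the configured log destination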


Loggly automatically parses several log formats you can navigate with the Fields Explorer.

 

Every stage in the software development life cycle can be enriched by logs and other metrics. Implementation, integration, staging, and production deployment (especially rolling deploys) will particularly benefit from monitoring such metrics appropriately.

 

Logs constitute valuable data for your tech team, and invaluable data for your business. Now that you have rich information about the app being generated in real time, think about ways to put it to good use.

 

4. Automate Your Monitoring Configuration

Modern applications are deployed using infrastructure as code (IaC) techniques because they replace fragile server configuration with systems that can be easily torn down and restarted. If your team has made undocumented changes to servers and is too scared to shut them down, those are essentially “pet” servers.

 

If you manually deploy monitoring configuration on a per-server basis, then you have the potential to lose visibility when servers stop or when you add new ones. If you treat monitoring as something to be automatically deployed and configured, then you’ll get better coverage for less effort in the long run. This becomes even more important when testing new versions of your infrastructure or code, and when recovering from outages. Tools like Terraform, Ansible, Puppet, and CloudFormation can automate not just the deployment of your application but the monitoring of it as well.

Monitoring tools typically have system agents that can be installed on your infrastructure to begin streaming metrics into their service. In the case of applications built on SaaS platforms, there are convenient integrations that plug into well-known ecosystems. For example, Loggly streams and centralizes logs as metrics, and supports dozens of out-of-the-box integrations, including Amazon CloudWatch and the Heroku PaaS platform.

 

5. Use Alerts on Your Key Metrics

Monitoring solutions like Loggly can alert you to changes in your SLIs over time, such as your error rate. They can help you visually identify the types of errors that occur and when they start, which helps you find root causes and fix problems faster, minimizing the impact on user experience.

Loggly chart of application errors split by errorCode.

 

Custom alerts can be created from saved log searches, which act as key metrics of your application’s performance. Loggly even lets you integrate alerts with incident management systems like PagerDuty and OpsGenie.

Adding an alert from a Syslog error log search in Loggly.

 

In conclusion, monitoring your SaaS service’s performance is important because it significantly impacts your business’s bottom line. This monitoring has to be planned for, applied early on, and instrumented across all stages of the SDLC.

 

Additionally, we explained how and why correct logging is one of the best sources of key metrics for measuring your monitoring goals during development and production of your SaaS service. Proper logging on an easy-to-use platform such as Loggly will also help your business harness invaluable intel in real time. You can leverage these streams of information for tuning your app, improving your service, and discovering new revenue models.

 

Sign up for a free 14-day trial of SolarWinds Loggly to start doing logging right today, and move your SaaS business to the next level of performance control and business intelligence.
