It’s easy to recognize problems in Ruby on Rails, but finding each problem’s source can be challenging. An unexpected event can mean hours of searching through log files and attempting to reproduce the issue. Poor logs will leave you searching, while helpful logs can point you to the cause right away.

 

Ruby on Rails applications automatically create and maintain basic text logs for each environment, such as development, staging, and production. You can easily format and add extra information to these logs using open-source logging libraries, such as Lograge and Fluentd. These libraries work well for a small application, but once you scale across many servers, you’ll need to aggregate the logs to troubleshoot problems across all of them.

 

In this tutorial, we will show you how Ruby on Rails applications handle logging natively. Then, we’ll show you how to send the logs to SolarWinds® Papertrail™. This log management solution enables you to centralize your logs in the cloud and provides helpful features like fast search, alerts, and more.

 

Ruby on Rails Native Logging

Ruby offers a built-in logging system. To use it, simply include the following code snippet in an environment file such as development.rb or production.rb. You can find these files under the config/environments directory of your project.

config.logger = Logger.new(STDOUT)

Or you can include the following in an initializer:

Rails.logger = Logger.new(STDOUT)

By default, each log is created under #{Rails.root}/log/ and the log file is named after the environment in which the application is running. The default format includes basic information, such as the date/time the entry was generated and a description (message or exception).

D, [2018-08-31T14:12:44.116332 #28944] DEBUG -- : Debug message
I, [2018-08-31T14:12:44.117330 #28944]  INFO -- : Test message
F, [2018-08-31T14:12:44.118348 #28944] FATAL -- : Terminating application, raised unrecoverable error!!!
F, [2018-08-31T14:12:44.122350 #28944] FATAL -- : Exception (something bad happened!):

Each log line also includes its severity, otherwise known as the log level. Log levels enable you to filter the logs when outputting them or when monitoring for problems, such as errors or fatal events. The available log levels are :debug, :info, :warn, :error, :fatal, and :unknown. These are converted to uppercase when output in the log file.
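
For example, the following snippet (a minimal sketch using the standard Rails.logger API) raises the logging threshold and then writes messages at several of these levels:

# Raise the threshold: anything below :warn will now be suppressed.
Rails.logger.level = Logger::WARN

Rails.logger.debug "Debug message" # filtered out
Rails.logger.info "Test message"   # filtered out
Rails.logger.fatal "Terminating application, raised unrecoverable error!!!" # written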

 

Formatting Logs Using Lograge

The default logging in Ruby on Rails during development or in production can be noisy, as you can see below. It also records a limited amount of
information for each page view.

I, [2018-08-31T14:37:44.588288 #27948]  INFO -- : method=GET path=/ format=html controller=Rails::WelcomeController action=index status=200 duration=105.06 view=51.52 db=0.00 params={"controller"=>"rails/welcome", "action"=>"index"} headers=#<ActionDispatch::Http::Headers:0x046ab950> view_runtime=51.52 db_runtime=0

Lograge adds extra detail and uses a format that is less human readable, but more useful for large-scale analysis through its JSON output option. JSON makes it easier to search, filter, and summarize large volumes of logs. The discrete fields facilitate the process of searching through logs and filtering for the information you need.

I, [2018-08-31T14:51:54.603784 #17752]  INFO -- : {"method":"GET","path":"/","format":"html","controller":"Rails::WelcomeController","action":"index","status":200,"duration":104.06,"view":51.99,"db":0.0,"params":{"controller":"rails/welcome","action":"index"},"headers":"#<ActionDispatch::Http::Headers:0x03b75520>","view_runtime":51.98726899106987,"db_runtime":0}

To configure Lograge in a Ruby on Rails app, follow these simple steps:

Step 1: Find the Gemfile under the project root directory and add the following gem.

gem 'lograge'

Step 2: Enable Lograge in each relevant environment (development, production, staging) or in an initializer. Environment files live under the config/environments directory of your project, and initializers under config/initializers.

# config/initializers/lograge.rb
# OR
# config/environments/production.rb
Rails.application.configure do
  config.lograge.enabled = true
end
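
The key=value output shown earlier is Lograge’s default format. If you want the JSON output instead, Lograge ships with a JSON formatter you can enable in the same block; a minimal sketch:

# config/initializers/lograge.rb
Rails.application.configure do
  config.lograge.enabled = true
  # Emit each request as a single JSON object instead of key=value pairs.
  config.lograge.formatter = Lograge::Formatters::Json.new
end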

Step 3: If you’re using Rails 5’s API-only mode and inherit from ActionController::API, you must define it as the controller base class that Lograge will patch:

# config/initializers/lograge.rb
Rails.application.configure do
  config.lograge.base_controller_class = 'ActionController::API'
end

With Lograge, you can include additional attributes in log messages, such as user ID, request ID, host, and source IP. You can read the Lograge documentation for more information.


Here’s a simple example that captures three attributes:

class ApplicationController < ActionController::Base
  # Rails calls append_info_to_payload on every request; override it to
  # attach extra data to the instrumentation payload Lograge reads from.
  def append_info_to_payload(payload)
    super
    payload[:user_id] = current_user.try(:id)
    payload[:host] = request.host
    payload[:source_ip] = request.remote_ip
  end
end

To include the above three attributes in the log output, add the following block to the same environment configuration (production.rb/development.rb) or initializer.

config.lograge.custom_options = lambda do |event|
  event.payload
end
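
If you’d rather not log the entire payload, the same hook can return only the fields you care about; a minimal sketch:

config.lograge.custom_options = lambda do |event|
  # Keep only the attributes added in append_info_to_payload.
  {
    user_id: event.payload[:user_id],
    host: event.payload[:host],
    source_ip: event.payload[:source_ip]
  }
end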

Troubleshoot Problems Faster Using Papertrail

Papertrail is a popular cloud-hosted log management service that integrates with many logging libraries. It makes it easy to centralize all your Ruby on Rails log management in the cloud, and you can track activity in real time, making it easier to identify and troubleshoot issues in production applications.

 

Papertrail provides numerous features for handling Ruby on Rails log files, including:

 

Instant log visibility: Papertrail provides fast search and team-wide access. It also provides analytics reporting and webhook monitoring, which can typically be set up in less than a minute.

 

Aggregate logs: Papertrail aggregates logs across your entire deployment, making them available from a single location. It provides you with an easy way to access all your logs, including application logs, database logs, Apache logs, and more.


 

Tail and search logs: Papertrail lets you tail logs in real time from
multiple devices. With the help of advanced searching and filtering tools, you can quickly troubleshoot issues in a production environment.

 

Proactive alert notifications: Almost every application has critical events
that require human attention. That’s precisely why alerts exist. Papertrail gives you the ability to receive alerts via email, Slack, Librato®, PagerDuty, or any custom HTTP webhooks of your choice.


 

Log archives: You can load the Papertrail log archives into third-party utilities, such as Redshift or Hadoop.

 

Log scalability: With Papertrail, you can scale your log volume and the duration for which logs remain searchable.

 

Encryption: For your security, Papertrail supports optional TLS encryption
and certificate-based destination host verification.

Configuring Ruby on Rails to Send Logs to Papertrail

It’s an easy task to get started with Papertrail. If you already have log files, you can send them to Papertrail using Nxlog or remote_syslog2. These utilities monitor the log files and send new events to Papertrail (a sample remote_syslog2 config is sketched below). Then we’ll show you how to send events asynchronously from Ruby on Rails using remote_syslog_logger.
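
For reference, remote_syslog2 is driven by a small YAML file (typically /etc/log_files.yml) listing which files to watch and where to send them. A minimal sketch, using the same placeholder host and port as the Rails example below and an assumed application log path:

# /etc/log_files.yml
files:
  - /var/www/yourapp/log/production.log
destination:
  host: logsN.papertrailapp.com
  port: XXXXX
  protocol: tls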

Add the remote_syslog_logger gem to your Gemfile. If you are not using a Gemfile, run the following command:

$ gem install remote_syslog_logger

Change the environment configuration file to log via remote_syslog_logger. This is almost always in config/environment.rb (to affect all environments) or config/environments/<environment name>.rb, such as config/environments/production.rb (to affect only a specific environment). Update the host and port to the ones given to you in your Papertrail log destination settings.

config.logger = RemoteSyslogLogger.new('logsN.papertrailapp.com', XXXXX)
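
The logger also accepts optional parameters; for example, tagging each message with a program name makes it easier to filter by app and environment in Papertrail. The program value here is just an illustration:

config.logger = RemoteSyslogLogger.new('logsN.papertrailapp.com', XXXXX, program: "rails-#{Rails.env}")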

It’s that simple! Your logs should now be sent to Papertrail.

 

Papertrail is designed to help you troubleshoot customer problems, resolve error messages, improve slow database queries, and more. It gives you analytical tools to help identify and resolve system anomalies and potential security issues. Learn more about how Papertrail can give you frustration-free log management in the cloud, and sign up for a trial or the free plan to get started.

Slow websites on your mobile device are frustrating when you’re trying to look up something quickly. When a page takes forever to load, it’s often due to a spotty network connection or a website that is overly complicated for a phone. Websites that load many images or videos can also eat up your data plan. Most people have a monthly cap on the amount of data they can use, and it can be expensive to pay an overage fee or upgrade your plan.

 

Can switching to a different browser app truly help websites load faster and use less data? We’ll put the most popular mobile browsers to the test to see which is the fastest and uses the least data. Most people use their phone’s default browser app, like Safari on iPhone or Chrome on Android. Other browsers, like Firefox Focus and Puffin, claim to be better at saving data. Let’s see which one comes out on top in our testing.

 

How We Benchmark

We’ll look specifically at page-load performance by testing three popular websites with different styles of content. The first will be the Google home page, which should load quickly as Google designed it to be fast. Next, we’ll measure the popular social media website Reddit. Lastly, we’ll test BuzzFeed, a complex website with many ads and trackers.

 

To conduct these tests, we’ll use an Apple iPhone 7. (We may look at other phones such as Android in future articles.) We’ll use the browsers with default settings and clear any private browsing data so cached data won’t change the results.

 

Since we don’t have access to the browser developer tools we’d typically have on a desktop, we’ll need to use a different technique. One way is to time how long it takes to download the page, but some websites preload data in the background to make your next click load faster. From the user’s perspective, this shouldn’t count toward the page-load time because it happens behind the scenes. A better way is to record a video of each page loading. We can then play them back and see how long each took to load all the visible content.

 

To see how much data each browser used, we’ll use something called a “proxy server” to monitor the phone’s connections. Normally, phones load data directly through the cellular carrier’s LTE connection or through a router’s Wi-Fi connection. A proxy server acts like a man in the middle, letting us count how much data passes between the website and the phone. It also lets us see which websites it loaded data from and even the contents of the data.

 

We’ll use the proxy server software called Fiddler. This tool also enables us to decrypt the HTTPS connection to the website and inspect exactly which data is being sent. We configured it for iOS by installing a root certificate on our phone so the phone trusts Fiddler to decrypt the data. Fiddler terminates the SSL connection with the external website, then re-encrypts the data to our phone using its own root certificate. This allows us to see statistics on which sites were visited, which assets were loaded, and more.



 

The Puffin browser made things more challenging because we were unable to see the contents of pages after installing the Fiddler root certificate. It’s possible Puffin uses a technique called certificate pinning. Nevertheless, we were still able to see the number of bytes being sent over the connection to our phone and which servers it connected to.

 

Which Browser Has the Best Mobile Performance?

Here are the results of measuring the page-load time for each of the mobile browsers against our three chosen websites. Faster page load times are better.

Browser          Google.com   Reddit.com   Buzzfeed.com
Safari           3.48s        5.50s        8.67s
Chrome           1.03s        4.93s        5.93s
Firefox          1.89s        3.47s        3.50s
Firefox Focus    2.67s        4.90s        5.70s
Puffin           0.93s        2.20s        2.40s

 

The clear winner in the performance category is Puffin, which loaded pages about twice as fast as most other browsers. Surprisingly, it even loaded Google faster than Chrome. Puffin claims the speed is due to a proprietary compression technology. Most modern browsers support gzip compression, but it’s up to site operators to enable it. Puffin can compress all content by passing it through its own servers first. It can also downsize images and videos so they load faster on mobile.

 

Another reason Puffin was so much faster is because it connected to fewer hosts. Puffin made requests to only 14 hosts, whereas Safari made requests to about 50 hosts. Most of those extra hosts are third-party advertisement and tracking services. Puffin was able to identify them and either remove them from the page or route calls through its own, faster servers at cloudmosa.net.

Puffin                           Safari
vid.buzzfeed.com: 83             img.buzzfeed.com: 51
google.com: 9                    www.google-analytics.com: 16
www.google.com: 2                www.buzzfeed.com: 14
en.wikipedia.org: 2              tpc.googlesyndication.com: 9
pointer2.cloudmosa.net: 2        securepubads.g.doubleclick.net: 7
data.flurry.com: 2               pixiedust.buzzfeed.com: 7
www.buzzfeed.com: 2              vid.buzzfeed.com: 6
pivot-ha2.cloudmosa.net: 1       cdn-gl.imrworldwide.com: 6
p40-buy.itunes.apple.com: 1      www.facebook.com: 6
gd11.cloudmosa.net: 1            sb.scorecardresearch.com: 3
gd10.cloudmosa.net: 1            cdn.smoot.apple.com: 3
gd9.cloudmosa.net: 1             pagead2.googlesyndication.com: 3
collector.cloudmosa.net: 1       video-player.buzzfeed.com: 3
www.flashbrowser.com: 1          gce-sc.bidswitch.net: 3
                                 secure-dcr.imrworldwide.com: 3
                                 connect.facebook.net: 3
                                 events.redditmedia.com: 3
                                 s3.amazonaws.com: 2
                                 thumbs.gfycat.com: 2
                                 staticxx.facebook.com: 2
                                 id.rlcdn.com: 2
                                 i.redditmedia.com: 2
                                 googleads.g.doubleclick.net: 2
                                 videoapp-assets-ak.buzzfeed.com: 2
                                 c.amazon-adsystem.com: 2
                                 buzzfeed-d.openx.net: 2
                                 pixel.quantserve.com: 2
                                 … 20 more omitted

 

It’s great Puffin was able to load data so quickly, but it raises some privacy questions. Any users of this browser are giving CloudMosa access to their entire browsing history. While Firefox and Chrome let you opt out of sending usage data, Puffin does not. In fact, it’s not possible to turn this tracking off without sacrificing the speed improvements. The browser is supported by ads, although its privacy policy claims it doesn’t keep personal data. Each user will have to decide if he or she is comfortable with this arrangement.

 

Which Browser Uses the Least Mobile Data?

Now let’s look at the amount of data each browser uses. Again, we see surprising results:

Browser          Google.com   Reddit.com   Buzzfeed.com
Safari           0.82MB       2.89MB       4.22MB
Chrome           0.81MB       2.91MB       5.46MB
Firefox          0.82MB       2.62MB       3.15MB
Firefox Focus    0.79MB       2.61MB       3.13MB
Puffin           0.54MB       0.17MB       42.2MB

 

Puffin was the clear leader for loading google.com and it dominated reddit.com by a factor of 10. It claims it saved 97% of data usage on reddit.com.



 

However, Puffin lost on buzzfeed.com by a factor of 10. In Fiddler, we saw that it made 83 requests to vid.buzzfeed.com. It appears it was caching video data in the background so videos would play faster. While doing so saves the user time, it ends up using way more data. On a cellular plan, this approach could quickly eat up a monthly cap.

 

As a result, Firefox Focus came out ahead on data usage for buzzfeed.com. Since Firefox Focus blocks trackers by default, it was able to load the page using the least mobile data, avoiding requests to most of the trackers listed in the host table above. In fact, setting Puffin aside, Firefox Focus consistently used the least data across all three pages. If privacy is important to you, Firefox Focus could be a great choice.

 

How to Test Your Website Performance

Looking at the three websites we tested, we see an enormous difference in both page-load time and the amount of data used. This matters because higher page-load times are correlated with higher bounce rates and even lower online purchases.

 

Pingdom® makes it even easier to test your own website’s performance with page speed monitoring. It gives you a report card showing how your website compares with others in terms of load time and page size.

To get a better idea of your customer’s experience, you can see a film strip showing how content on the page loads over time. Below, we can see that Reddit takes about two seconds until it’s readable. If we scroll over, we’d see it takes about six seconds to load all the images.

The SolarWinds® Pingdom® solution also allows us to dive deeper into a timeline view showing exactly which assets were loaded and when. The timeline view helps us see if page assets are loading slowly because of network issues or their size, or because third parties are responding slowly. The view will give us enough detail to go back to the engineering team with quantifiable data.

Pingdom offers a free version that gives you a full speed report and tons of actionable insights. The paid version also gives you the filmstrip, tracks changes over time, and offers many more website monitoring tools.

 

Conclusion

The mobile browser you choose can make a big difference in terms of page-load time and data usage. We saw that the Puffin browser was able to load pages much faster than the default Safari browser on an Apple iPhone 7. Puffin also used less data to load some, but not all, pages. However, for those who care about privacy and saving data on their mobile plan, Firefox Focus may be your best bet.

 

Because mobile performance is so important for customers, you can help improve your own website using the Pingdom page speed monitoring solution. This tool will give you a report card to share with your team and specific actions you can take to make your site faster.

We’re no strangers to logging from Docker containers here at SolarWinds® Loggly®. In the past, we’ve demonstrated different techniques for logging individual Docker containers. But while logging a handful of containers is easy, what happens when you start deploying dozens, hundreds, or thousands of containers across different machines?

In this post, we’ll explore the best practices for logging applications deployed using Docker Swarm.

Intro to Docker Swarm

Docker Swarm is a container orchestration and clustering tool from the creators of Docker. It allows you to deploy container-based applications across a number of computers running Docker. Swarm uses the same command-line interface (CLI) as Docker, making it more accessible to users already familiar with Docker. And as the second most popular orchestration tool behind Kubernetes, Swarm has a rich ecosystem of third-party tools and integrations.

A swarm consists of manager nodes and worker nodes. Managers control how containers are deployed, and workers run the containers. In Swarm, you don’t interact directly with containers; instead, you define services that describe what the final deployment should look like. Swarm handles deploying, connecting, and maintaining these containers until they meet the service definition.

For example, imagine you want to deploy an Nginx web server. Normally, you would start an Nginx container on port 80 like so:

$ docker run --name nginx --detach --publish 80:80 nginx

With Swarm, you instead create a service that defines what image to use, how many replica containers to create, and how those containers should interact with both the host and each other. For example, let’s deploy an Nginx image with three containers (for load balancing) and expose it over port 80.

$ docker service create --name nginx --detach --publish 80:80 --replicas 3 nginx

When the deployment is done, you can access Nginx using the IP address of any node in the Swarm.


To learn more about Docker services, see the services documentation.

 

The Challenges of Monitoring and Debugging Docker Swarm

Besides the existing challenges in container logging, Swarm adds another layer of complexity: an orchestration layer. Orchestration simplifies deployments by taking care of implementation details such as where and how containers are created. But if you need to troubleshoot an issue with your application, how do you know where to look? Without comprehensive logs, pinpointing the exact container or service where an error occurred can become an operational nightmare.

On the container side, nothing much changes from a standard Docker environment. Your containers still send logs to stdout and stderr, which the host Docker daemon accesses using its logging driver. But now your container logs include additional information, such as the service that the container belongs to, a unique container ID, and other attributes auto-generated by Swarm.

Consider the Nginx example. Imagine one of the containers stops due to a configuration issue. Without a monitoring or logging solution in place, the only way to know this happened is by connecting to a manager node using the Docker CLI and querying the status of the service. And while Swarm automatically groups log messages by service using the docker service logs command, searching for a specific container’s messages can be time-consuming because it only works when logged in to that specific host.
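
For example, checking the state of the service’s tasks requires running something like the following from a manager node:

# Lists each task (container) in the service along with its current
# and desired state, including any tasks that have failed.
$ docker service ps nginx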

 

How Docker Swarm Handles Logs

Like a normal Docker deployment, Swarm has two primary log destinations: the daemon log (events generated by the Docker service), and container logs (events generated by containers). Swarm doesn’t maintain separate logs, but appends its own data to existing logs (such as service names and replica numbers).
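
On a systemd-based host (an assumption; the location varies by distribution and logging configuration), you can read the daemon log with journalctl:

# Show events generated by the Docker daemon itself
$ journalctl -u docker.service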

The difference is in how you access logs. Instead of showing logs on a per-container basis using docker logs <container name>, Swarm shows logs on a per-service basis using docker service logs <service name>. This aggregates and presents log data from all of the containers running in a single service. Swarm differentiates containers by adding an auto-generated container ID and instance ID to each entry.

For example, the following message was generated by the second container of the nginx_nginx service, running on swarm-client1.

# docker service logs nginx_nginx
nginx_nginx.2.subwnbm15l3f@swarm-client1 | 10.255.0.2 - - [01/Jun/2018:22:21:11 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0" "-"

To learn more about the logs command, see the Docker documentation.

 

Options for Logging in Swarm

Since Swarm uses Docker’s existing logging infrastructure, most of the standard Docker logging techniques still apply. However, to centralize your logs, each node in the swarm will need to be configured to forward both daemon and container logs to the destination. You can use a variety of methods such as Logspout, the daemon logging driver, or a dedicated logger attached to each container.
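
For example, one way to use the daemon logging driver is to point each node’s Docker daemon at a syslog destination; a minimal sketch, with a placeholder endpoint:

# /etc/docker/daemon.json -- set on every node, then restart Docker
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://logs.example.com:514"
  }
}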

 

Best Practices to Improve Logging

To log your swarm services effectively, there are a few steps you should take.

 

1. Log to STDOUT and STDERR in Your Apps

Docker automatically forwards all standard output from containers to the built-in logging driver. To take advantage of this, applications running in your Docker containers should write all log events to STDOUT and STDERR. If you log to files inside the container instead, you risk losing this data when the container stops or is rescheduled.

 

2. Log to Syslog or JSON

Syslog and JSON are two of the most commonly supported logging formats, and Docker is no exception. Docker stores container logs as JSON files by default, but it includes a built-in driver for logging to Syslog endpoints. Both JSON and Syslog messages are easy to parse, contain critical information about each container, and are supported by most logging services. Many container-based loggers such as Logspout support both JSON and Syslog, and Loggly has complete support for parsing and indexing both formats.
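
For instance, to send a single service’s logs to a syslog endpoint instead of the default JSON files, you can set the logging driver when creating the service (the endpoint here is a placeholder):

$ docker service create --name nginx --publish 80:80 --log-driver syslog --log-opt syslog-address=udp://logs.example.com:514 nginx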

 

3. Log to a Centralized Location

A major challenge in cluster logging is tracking down log files. Services could be running on any one of several different nodes, and having to manually access log files on each node can become unsustainable over time. Centralizing logs lets you access and manage your logs from a single location, reducing the amount of time and effort needed to troubleshoot problems.

One common solution for container logs is dedicated logging containers. As the name implies, dedicated logging containers are created specifically to gather and forward log messages to a destination such as a syslog server. Dedicated containers automatically collect messages from other containers running on the node, making setup as simple as running the container.

 

Why Loggly Works for Docker Swarm

Normally you would access your logs by connecting to a manager node, running docker service logs <service name>, and scrolling down to find the logs you’re looking for. Not only is this labor-intensive, it’s also slow because you can’t easily search, and it’s difficult to automate with alerts or graphs. The more time you spend searching for logs, the longer problems go unresolved. Doing this yourself also means creating and maintaining your own log centralization infrastructure, which can become a significant project on its own.

Loggly is a log aggregation, centralization, and parsing service. It provides a central location for you to send and store logs from the nodes and containers in your swarm. Loggly automatically parses and indexes messages so you can search, filter, and chart logs in real-time. Regardless of how big your swarm is, your logs will be handled by Loggly.

 

Sending Swarm Logs to Loggly

The easiest way to send your container logs to Loggly is with Logspout, a container that automatically routes all log output from other containers running on the same node. When you run it as a service in global mode, Swarm automatically creates a Logspout container on each node in the swarm.

 

To route your logs to Loggly, provide your Loggly Customer Token and a custom tag, then specify a Loggly endpoint as the logging destination.

# docker service create --name logspout --mode global --detach --volume=/var/run/docker.sock:/var/run/docker.sock --volume=/etc/hostname:/etc/host_hostname:ro -e SYSLOG_STRUCTURED_DATA="<Loggly Customer Token>@41058 tag=\"<custom tag>\"" gliderlabs/logspout syslog+tcp://logs-01.loggly.com:514

You can also define a Logspout service using Compose.

# docker-compose-logspout.yml
version: "3"

networks:
  logging:

services:
  logspout:
    image: gliderlabs/logspout
    networks:
      - logging
    volumes:
      - /etc/hostname:/etc/host_hostname:ro
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      SYSLOG_STRUCTURED_DATA: "<Loggly Customer Token>@41058"
      tag: "<custom tag>"
    command: syslog+tcp://logs-01.loggly.com:514
    deploy:
      mode: global

Use docker stack deploy to deploy the Compose file to your swarm. <stack name> is the name that you want to give to the deployment.

# docker stack deploy --compose-file docker-compose-logspout.yml <stack name>

As soon as the deployment is complete, messages generated by your containers start appearing in Loggly.

Configuring Dashboards and Alerts

Since Swarm automatically appends information about the host, service, and replica to each log message, we can create Dashboards and Alerts similar to those for a single-node Docker deployment. For example, Loggly automatically breaks down logs from the Nginx service into individual fields.


We can create Dashboards that show, for example, the number of errors generated on each node, as well as the container activity level on each node.


Alerts are useful for detecting changes in the status of a service. If you want to detect a sudden increase in errors, you can easily create a search that scans messages from a specific service for error-level logs.


You can select this search from the Alerts screen and specify a threshold. For example, this alert triggers if the Nginx service logs more than 10 errors over a 5-minute period.


Conclusion

While Swarm can add a layer of complexity over a typical Docker installation, logging it doesn’t have to be difficult. Tools like Logspout and Docker logging drivers have made it easier to collect and manage container logs no matter where those containers are running. And with Loggly, you can easily deploy a complete, cluster-wide logging solution across your entire environment.
