
Slow websites on your mobile device are frustrating when you’re trying to look up something quickly. When a page takes forever to load, it’s often due to a spotty network connection or a website that is overly complicated for a phone. Websites that load many images or videos can also eat up your data plan. Most people have a monthly cap on the amount of data they can use, and it can be expensive to pay an overage fee or upgrade their plan.

 

Can switching to a different browser app truly help websites load faster and use less data? We’ll put the most popular mobile browsers to the test to see which is the fastest and uses the least data. Most people use their phone’s default browser app, like Safari on iPhone or Chrome on Android. Other browsers, like Firefox Focus and Puffin, claim to be better at saving data. Let’s see which one comes out on top in our testing.

 

How We Benchmark

We’ll look specifically at page-load performance by testing three popular websites with different styles of content. The first will be the Google home page, which should load quickly as Google designed it to be fast. Next, we’ll measure the popular social media website Reddit. Lastly, we’ll test BuzzFeed, a complex website with many ads and trackers.

 

To conduct these tests, we’ll use an Apple iPhone 7. (We may look at other devices, such as Android phones, in future articles.) We’ll use the browsers with default settings and clear any private browsing data so cached data won’t change the results.

 

Since we don’t have access to the browser developer tools we’d typically have on a desktop, we’ll need to use a different technique. One way is to time how long it takes to download the page, but some websites preload data in the background to make your next click load faster. From the user’s perspective, this shouldn’t count toward the page-load time because it happens behind the scenes. A better way is to record a video of each page loading. We can then play them back and see how long each took to load all the visible content.
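If you want to turn that recording into numbers, one low-tech approach is to split the video into frames with ffmpeg. This is a minimal sketch, not part of the original setup; the file name pageload.mov and the frame rate are illustrative:

$ mkdir -p frames
$ ffmpeg -i pageload.mov -vf fps=10 frames/frame_%03d.png
# at 10 frames per second, adjacent frames are 0.1s apart; the first frame
# where all visible content has rendered gives the page-load time

Stepping through stills makes it much easier to judge, to within a tenth of a second, when the page finished rendering.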

 

To see how much data each browser used, we’ll use something called a “proxy server” to monitor the phone’s connections. Normally, phones load data directly through the cellular carrier’s LTE connection or through a router’s Wi-Fi connection. A proxy server acts like a man in the middle, letting us count how much data passes between the website and the phone. It also lets us see which websites it loaded data from and even the contents of the data.
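As a rough illustration of the idea (not part of our actual test procedure), you can see the same counting effect by sending a request through any HTTP proxy with curl; the proxy address here is hypothetical, and 8888 is Fiddler’s default listening port:

$ curl -x http://192.168.1.20:8888 -s -o /dev/null \
    -w 'downloaded %{size_download} bytes\n' https://www.google.com
# add --insecure (or trust the proxy's root certificate) when the proxy
# re-encrypts HTTPS traffic with its own certificate, as Fiddler does

Because every byte of the response passes through the proxy, the proxy can tally data usage per host.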

 

We’ll use proxy server software called Fiddler. This tool also enables us to decrypt the HTTPS connection to the website and spy on exactly which data is being sent. We configured it for iOS by installing a Fiddler root certificate on our phone. Fiddler terminates the SSL connection with the external website, then re-encrypts the data to our phone using its own root certificate. This lets us see statistics on which sites were visited, which assets were loaded, and more.


Fiddler’s web session inspector. ©2015 Telerik

 

The Puffin browser made things more challenging because we were unable to see the contents of pages after installing the Fiddler root certificate. It’s possible Puffin uses a technique called certificate pinning. Nevertheless, we were still able to see the number of bytes being sent over the connection to our phone and which servers it connected to.

 

Which Browser Has the Best Mobile Performance?

Here are the results of measuring the page-load time for each of the mobile browsers against our three chosen websites. Faster page load times are better.

Browser         Google.com   Reddit.com   Buzzfeed.com
Safari          3.48s        5.50s        8.67s
Chrome          1.03s        4.93s        5.93s
Firefox         1.89s        3.47s        3.50s
Firefox Focus   2.67s        4.90s        5.70s
Puffin          0.93s        2.20s        2.40s

 

The clear winner in the performance category is Puffin, which loaded pages about twice as fast as most other browsers. Surprisingly, it even loaded Google faster than Chrome. Puffin claims the speed is due to a proprietary compression technology. Most modern browsers support gzip compression, but it’s up to site operators to enable it. Puffin can compress all content by passing it through its own servers first. It can also downsize images and videos so they load faster on mobile.
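For comparison, here’s roughly what opting in looks like on an nginx server; this is a minimal sketch with illustrative values, not part of our test setup:

# nginx.conf: enable gzip compression for common text assets
gzip            on;
gzip_comp_level 5;
gzip_types      text/css application/javascript application/json image/svg+xml;
# text/html is compressed by default once gzip is on

Puffin gets a similar effect everywhere by compressing and downsizing content on its own servers, even for sites that never opted in.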

 

Another reason Puffin was so much faster is because it connected to fewer hosts. Puffin made requests to only 14 hosts, whereas Safari made requests to about 50 hosts. Most of those extra hosts are third-party advertisement and tracking services. Puffin was able to identify them and either remove them from the page or route calls through its own, faster servers at cloudmosa.net.

Puffin                          Safari
vid.buzzfeed.com: 83            img.buzzfeed.com: 51
google.com: 9                   www.google-analytics.com: 16
www.google.com: 2               www.buzzfeed.com: 14
en.wikipedia.org: 2             tpc.googlesyndication.com: 9
pointer2.cloudmosa.net: 2       securepubads.g.doubleclick.net: 7
data.flurry.com: 2              pixiedust.buzzfeed.com: 7
www.buzzfeed.com: 2             vid.buzzfeed.com: 6
pivot-ha2.cloudmosa.net: 1      cdn-gl.imrworldwide.com: 6
p40-buy.itunes.apple.com: 1     www.facebook.com: 6
gd11.cloudmosa.net: 1           sb.scorecardresearch.com: 3
gd10.cloudmosa.net: 1           cdn.smoot.apple.com: 3
gd9.cloudmosa.net: 1            pagead2.googlesyndication.com: 3
collector.cloudmosa.net: 1      video-player.buzzfeed.com: 3
www.flashbrowser.com: 1         gce-sc.bidswitch.net: 3
                                secure-dcr.imrworldwide.com: 3
                                connect.facebook.net: 3
                                events.redditmedia.com: 3
                                s3.amazonaws.com: 2
                                thumbs.gfycat.com: 2
                                staticxx.facebook.com: 2
                                id.rlcdn.com: 2
                                i.redditmedia.com: 2
                                googleads.g.doubleclick.net: 2
                                videoapp-assets-ak.buzzfeed.com: 2
                                c.amazon-adsystem.com: 2
                                buzzfeed-d.openx.net: 2
                                pixel.quantserve.com: 2
                                … 20 more omitted

 

It’s great Puffin was able to load data so quickly, but it raises some privacy questions. Any users of this browser are giving CloudMosa access to their entire browsing history. While Firefox and Chrome let you opt out of sending usage data, Puffin does not. In fact, it’s not possible to turn this tracking off without sacrificing the speed improvements. The browser is supported by ads, although its privacy policy claims it doesn’t keep personal data. Each user will have to decide if he or she is comfortable with this arrangement.

 

Which Browser Uses the Least Mobile Data?

Now let’s look at the amount of data each browser uses. Again, we see surprising results:

Browser         Google.com   Reddit.com   Buzzfeed.com
Safari          0.82MB       2.89MB       4.22MB
Chrome          0.81MB       2.91MB       5.46MB
Firefox         0.82MB       2.62MB       3.15MB
Firefox Focus   0.79MB       2.61MB       3.13MB
Puffin          0.54MB       0.17MB       42.2MB

 

Puffin was the clear leader for loading google.com and it dominated reddit.com by a factor of 10. It claims it saved 97% of data usage on reddit.com.


Puffin’s reported data savings. ©2015 Telerik

 

However, Puffin lost on buzzfeed.com by a factor of 10. In Fiddler, we saw that it made 83 requests to vid.buzzfeed.com. It appears it was caching video data in the background so videos would play faster. While doing so saves the user time, it ends up using way more data. On a cellular plan, this approach could quickly eat up a monthly cap.

 

As a result, Firefox Focus took the lead for data usage on buzzfeed.com. Since Firefox Focus blocks trackers by default, it was able to load the page using the least amount of mobile data, avoiding requests to most of the trackers listed in the table above. In fact, setting Puffin aside, Firefox Focus consistently used the least data on all three pages. If privacy is important to you, Firefox Focus could be a great choice.

 

How to Test Your Website Performance

Looking at the three websites we tested, we see an enormous difference in both page-load time and the amount of data used. This matters because higher page-load times are correlated with higher bounce rates and even lower online purchases.

 

Pingdom® makes it even easier to test your own website’s performance with page speed monitoring. It gives you a report card showing how your website compares with others in terms of load time and page size.

To get a better idea of your customer’s experience, you can see a film strip showing how content on the page loads over time. Below, we can see that Reddit takes about two seconds until it’s readable. If we scroll over, we’d see it takes about six seconds to load all the images.

The SolarWinds® Pingdom® solution also allows us to dive deeper into a timeline view showing exactly which assets were loaded and when. The timeline view helps us see if page assets are loading slowly because of network issues or their size, or because third parties are responding slowly. The view will give us enough detail to go back to the engineering team with quantifiable data.

Pingdom offers a free version that gives you a full speed report and tons of actionable insights. The paid version also gives you the filmstrip, tracks changes over time, and offers many more website monitoring tools.

 

Conclusion

The mobile browser you choose can make a big difference in terms of page-load time and data usage. We saw that the Puffin browser was able to load pages much faster than the default Safari browser on an Apple iPhone 7. Puffin also used less data to load some, but not all, pages. However, for those who care about privacy and saving data on their mobile plan, Firefox Focus may be your best bet.

 

Because mobile performance is so important for customers, you can help improve your own website using the Pingdom page speed monitoring solution. This tool will give you a report card to share with your team and specific actions you can take to make your site faster.

Have you ever wondered what happens when you type an address into your browser? The first step is the translation of a domain name (such as pingdom.com) to an IP address. Resolving domain names is done through a series of systems and protocols that make up the Domain Name System (DNS). Here we’ll break down what DNS is, and how it powers the underlying infrastructure of the internet.

 

What is DNS?

Traffic across the internet is routed by an identifier called an IP address. You may have seen IP addresses before. IPv4 addresses are a series of four numbers between 0 and 255, separated by periods (for example: 123.45.67.89).

 

IP addresses are at the core of communicating between devices on the internet, but they are hard to memorize and can change often, even for the same service. To get around these problems, we give names to IP addresses. For example, when you type https://www.pingdom.com into your web browser, it translates that name into an IP address, which your computer then uses to access a server that ultimately responds with the contents of the page that your browser displays. If a new server is put into place with a new IP address, that name can simply be updated to point to the new address.
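You can watch this translation happen with the dig utility, which ships with most Unix-like systems; the values returned will vary:

$ dig +short www.pingdom.com A
# prints the CNAME chain (if any) followed by one or more IPv4 addresses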

 

These records are stored in the name server for a given name, or “zone,” in DNS parlance. These zones can include many different records and record types for the base name and subdomains in that zone.

 

The internet is decentralized, designed to withstand failure, and avoids relying on a single source of truth. DNS is built for this environment using recursion: DNS servers talk to each other to find the answer to a request, with each referral pointing to a more authoritative server, starting from one of 13 globally maintained “root” server addresses that act as the definitive entry point for other DNS servers.

 

Anatomy of a DNS Request

When you type “pingdom.com” into your browser and hit enter, your browser doesn’t directly ask the web servers for that page. First, a multi-step interaction with DNS servers must happen to translate pingdom.com into an IP address that is usable for establishing a connection and routing traffic. Here’s what that interaction looks like:

  1. The recursive DNS server requests pingdom.com from a DNS root server. The root server replies with the .com TLD name server IP address.
  2. The recursive DNS server requests pingdom.com from the .com TLD name server. The TLD name server replies with the authoritative name server for pingdom.com.
  3. The recursive DNS server requests pingdom.com from the pingdom.com nameserver. The nameserver replies with the IP address from the A record for pingdom.com. This IP address is returned to the client.
  4. The client requests pingdom.com using the web server’s IP address that was just resolved. (You can watch this referral chain yourself with dig, as shown below.)
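The dig tool’s trace mode performs this walk from the root servers instead of asking your local resolver, which makes each referral visible:

$ dig +trace pingdom.com
# shows the root server referral, then the .com TLD referral, then the
# final answer from pingdom.com's authoritative nameserver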

 

In subsequent requests, the recursive name server will have the IP address for pingdom.com.

This IP address is cached for a period of time determined by the pingdom.com nameserver. This value is called the time-to-live (TTL) for that domain record. A high TTL for a domain record means that local DNS resolvers will cache responses for longer and give quicker responses. However, making changes to DNS records can take longer due to the need to wait for all cached records to expire. Conversely, domain records with low TTLs can change much more quickly, but DNS resolvers will need to refresh their records more often.
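The TTL is easy to inspect yourself; it’s the second column of dig’s answer section (the values below are illustrative):

$ dig +noall +answer pingdom.com
# pingdom.com.   300   IN   A   x.x.x.x
# 300 is the TTL: resolvers may cache this answer for up to 300 seconds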

 

Not Just for the Web

The DNS protocol is for anything that requires a decentralized name, not just the web. To differentiate between various types of servers registered with a nameserver, we use record types. For example, email servers are part of DNS. If a domain name has an MX record, it is signaling that the address associated with that record is an email server.

 

Some of the more common record types you will see are:

  • A Record – used to point names directly at IPv4 addresses. This is used by web browsers.
  • AAAA Record – used to point names directly at IPv6 addresses. This is used by web browsers when a device has an IPv6 network.
  • CNAME Record – also known as the Canonical Name record, used to point web domains at other DNS names. This is common when using platforms as a service such as Heroku, or cloud load balancers that provide an external domain name rather than an IP address.
  • MX Record – as mentioned before, MX records are used to point a domain to mail servers.
  • TXT Record – arbitrary information attached to a domain name. This can be used to attach validation or other metadata to a name as part of the DNS system. Note that a single name can hold multiple records of the same type (for example, several MX records with different priorities), though a CNAME must stand alone. (The dig examples below show how to query each type.)
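Each record type can be queried directly with dig by naming the type after the domain:

$ dig +short pingdom.com MX
$ dig +short pingdom.com TXT
$ dig +short www.pingdom.com CNAME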

 

DNS Security and Privacy

There are many parts to resolving a DNS request, and each part is subject to security and privacy issues. First, how do we verify that the IP address we received is actually the one on file with the domain’s authoritative nameserver? Attacks exist that can disrupt the DNS chain, providing false information back to the client or triggering denial-of-service attacks against sites. Untrusted network environments are vulnerable to man-in-the-middle attacks that can hijack DNS requests and return false results.

 

There is ongoing work to enhance the security of DNS with the Domain Name System Security Extensions (DNSSEC). This is a combination of new records, public-key cryptography, and establishing a chain of trust with DNS providers to ensure domain records have not been tampered with. Some DNS providers today offer the ability to enable DNSSEC, and its adoption is growing as DNS-based attacks become more prevalent.

 

DNS requests are also typically unencrypted, which allows attackers and observers to pry into the contents of a DNS request. This information is valuable, and your ISP or recursive zone provider may be providing this information to third parties or using it to track your activity. Furthermore, it may or may not contain personally identifiable information like your IP address, which can be correlated with other tracking information that third parties may be holding.

 

There are a few ways to help protect your privacy with DNS and prevent this sort of tracking:

 

1. Use a Trusted Recursive Resolver

Using a trusted recursive resolver is the first step to ensuring the privacy of your DNS requests. For example, the Cloudflare DNS service at https://1.1.1.1 is a fast, privacy-centric DNS resolver. Cloudflare doesn’t log IP addresses or track requests that you make against it at any time.
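To confirm a resolver behaves as expected, you can point a one-off query directly at it with dig’s @ syntax:

$ dig @1.1.1.1 pingdom.com +short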

 

2. Use DNS over HTTPS (DoH)

DoH is another way of enhancing your privacy and security when interacting with DNS resolvers. Even when using a trusted recursive resolver, man-in-the-middle attacks can alter the returned contents back to the requesting client. DNSSEC offers a way to fix this, but adoption is still early, and relies on DNS providers to enable this feature.

 

DoH secures this at the client to DNS resolver level, enabling secure communication between the client and the resolver. The Cloudflare DNS service offers DNS over HTTPS, further enhancing the security model that their recursive resolver provides. Keep in mind that the domain you’re browsing is still available to ISPs thanks to Server Name Indication, but the actual contents, path, and other parts of the request are encrypted.
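Cloudflare’s resolver also exposes a JSON flavor of DoH that is easy to poke at from the command line; this sketch assumes a reasonably recent curl and Cloudflare’s documented dns-query endpoint:

$ curl -s -H 'accept: application/dns-json' \
    'https://cloudflare-dns.com/dns-query?name=pingdom.com&type=A'
# returns the answer as JSON over an encrypted HTTPS connection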

 

Even without DNSSEC, you can still have a more private internet experience. Firefox recently switched over to using the Cloudflare DNS resolver for all requests by default. At this time, DoH isn’t enabled by default unless you are using the nightly build.

 

Monitoring DNS Problems

DNS is an important part of your site’s availability because a problem can cause a complete outage. DNS has been known to cause outages due to BGP attacks, TLD outages, and other unexpected issues. It’s important your uptime or health check script includes DNS lookups.

With SolarWinds® Pingdom®, we can monitor for DNS problems using the uptime monitoring tool. Here, we’ll change the DNS record for a domain and show you how the Pingdom tool responds. Once you have an uptime check added in Pingdom, click the “Reports” section, then “Uptime,” and go to your domain of interest. Under the “Test Result Log” tab for an individual domain’s uptime report, hover over the failing entry to see why a check failed.

This tells us that for our domain, we have a “Non-recoverable failure in name resolution.” This lets us know to check our DNS records. After we fix the problem, our next check succeeds:

Pingdom gives us a second set of eyes to make sure our site is still up as expected.

 

Curious to learn more about DNS? Check out our post on how to test your DNS configuration. You can also learn more about Pingdom uptime monitoring.

Look back at almost any online technology business 10, or even 5, years ago and you’d see a clear distinction between what the CTO and CMO did in their daily roles. The former would oversee the building of technology and products whilst the latter would drive the marketing that brought in the customers to use said technology. In short, the two took care of two very different sides of the same coin.

 

Marketing departments traditionally measure their success against KPIs such as the number of conversions a campaign brought in versus the cost of running it. Developers measure their performance on how quickly and effectively they develop new technologies.

 

Today, companies are shifting focus toward a customer-centric approach, where customer experience and satisfaction are paramount. After all, how your customers feel about your products can make or break a business.

Performance diagnostic tools can help you optimize a slow web page but won’t show you whether your visitors are satisfied.

So where do the classic stereotypes that engineers only care about performance and marketers only care about profit fit into the customer-centric business model? The answer is they don’t: in a business where every department works toward the same goal of improving the customer experience, having separate KPIs is as redundant as a trap door in a canoe.

 

The only KPI that matters is “are my customers happy?”

 

Developers + Marketing KPIs = True

With technology being integral to any online business, we marketers are now in a position to gather so much data, in such detail, that we are on the front line when it comes to gauging the satisfaction and experience of our customers. We can see what path a visitor took on our website, how long they took to complete their journey, and whether they achieved what they set out to do.

 

Armed with this, we stand in a position to influence the technologies developers build and use.

 

Support teams, no longer confined to troubleshooting customer problems, have become Customer Success teams that directly influence how developers build products, armed with first-hand data from their customers.

 

So as the lines blur between departments, it shouldn’t come as a surprise that engineering teams care about marketing metrics. After all, if a product is only as effective as the people who use it, engineers build better products and websites when they know how customers intend to use them.

 

Collaboration is King

“How could engineers possibly make good use of marketing KPIs?” you might ask. After all, the two are responsible for separate ends of your business but can benefit from the same data.

 

Take a vital page on your business’s website: it’s not the fastest page on the net, but its load time is consistent and it achieves its purpose of converting visitors to customers. Suddenly, your bounce rate shoots up from 5% to 70%.

Ask an engineer to troubleshoot the issue and they might tell you that the page isn’t efficient. It takes 2.7 seconds to load, which is 0.7 seconds over the commonly cited two-second benchmark, and some of the files on the page are huge.

 

Ask a marketer the same question and they might tell you that the content is sloppy, making the purpose of the page unclear. The colors are off-brand, and an important CTA is missing.

 

Even though both have been looking at the same page, they’ve come to two very different results, but the bottom line is that your customer doesn’t care about what went wrong. What matters is that the issue is identified and solved, quickly.

 

Unified Metrics Mean Unified Monitoring

Having unified KPIs across the various teams internal to your organisation means that they should all draw their data from the same source: a single, unified monitoring tool.

 

For businesses where the customer comes first, a new breed of monitoring is evolving that offers organizations this unified view, centred on how your customer experiences your site: Digital Experience Monitoring, or seeing as everything we do is digital, how about we just call it Experience Monitoring?

With Digital Experience Monitoring, your marketers and your engineering teams can follow a customer’s journey through your site, see how they navigated it, and see where and why interest became a sale or a lost opportunity.

 

Let’s go back to our previous example: both your marketer and your engineer will see that although your bounce rate skyrocketed, the page load time and size stayed consistent. What they might also see is that the onboarding flow you implemented, which coincides with the bounce rate spike, is confusing your customers, meaning they leave frustrated and unwilling to convert.

 

Digital Experience Monitoring gives a holistic view of your website’s health and helps you answer questions like:

  • Where your visitors come from
  • When they visit your site
  • What they visit and the journey they take to get there
  • How your site’s performance impacts your visitors

By giving your internal teams access to the same metrics, you foster greater transparency across your organization, which leads to faster resolution of issues, a deeper knowledge of your visitors, and better insights into what your customers love about your products.

 

Pingdom’s Digital Experience Monitoring, Visitor Insights, bridges the gap between site performance and customer satisfaction, meaning you can guess less and know more about how your visitors experience your site.

Version 1.1 of the venerable HTTP protocol powered the web for 18 years.

 

Since then, websites have evolved from static, text-driven documents into interactive, media-rich applications. The fact that the underlying protocol remained unchanged throughout this time just goes to show how versatile and capable it is. But as the web grew bigger, its limitations became more obvious.

 

We needed a replacement, and we needed it soon.

 

Enter HTTP/2. Published in early 2015, HTTP/2 optimizes website connections without changing the existing application semantics. This means you can take advantage of HTTP/2’s features such as improved performance, updated error handling, reduced latency, and lower overhead without changing your web applications.

 

Today nearly 84% of modern browsers and 27% of all websites support HTTP/2, and those numbers are gradually increasing.

 

How is HTTP/2 Different from HTTP/1.1?

HTTP/2’s biggest changes impact the way data is formatted and transported between clients and servers.

 

Binary Data Format

HTTP/2 encapsulates data using a binary protocol. With HTTP/1.1, messages are transmitted in plaintext. This makes requests and responses easy to format and even read using a packet analysis tool, but results in increased size due to unnecessary whitespace and inefficient compression.

 

The benefit of a binary protocol is it allows for more compact, more easily compressible, and less error-prone transmissions.

 

Persistent TCP Connections

In early versions of HTTP, a new TCP connection had to be created for each request and response. HTTP/1.1 introduced persistent connections, allowing multiple requests and responses over a single connection. The problem was that messages were exchanged sequentially, with web servers refusing to accept new requests until previous requests were fulfilled.

 

HTTP/2 simplifies this by allowing for multiple simultaneous downloads over a single TCP connection. After a connection is established, clients can send new requests while receiving responses to earlier requests. Not only does this reduce the latency in establishing new connections, but servers no longer need to maintain multiple connections to the same clients.

 

Multiplexing

Persistent TCP connections paved the way for multiplexed transfers. With HTTP/2, multiple resources can be transferred simultaneously. Clients no longer need to wait for earlier resources to finish downloading before the next one begins. Website developers used to rely on workarounds such as domain sharding to “trick” browsers into fetching resources in parallel, at the cost of opening many extra TCP connections to a single host. HTTP/2 makes this entire practice obsolete.

 

Header Compression and Reuse

In HTTP/1.1, headers are sent uncompressed and repeated for each request. As the number of requests grows, so does the volume of duplicate header information. HTTP/2 eliminates redundant headers and compresses the remaining headers to drastically decrease the amount of data repeated during a session.

 

Server Push

Instead of waiting for clients to request resources, servers can now push resources. This allows websites to preemptively send content to users, minimizing wait times.

 

Does My Site Already Support HTTP/2?

Several major web servers and content delivery networks (CDNs) support HTTP/2. The fastest way to check if your website supports HTTP/2 is to navigate to the website in your browser and open Developer Tools. In Firefox and Chrome, press Ctrl-Shift-I or the F12 key and click the Network tab. Reload the page to populate the table with a list of responses. Right-click the column names in the table and enable the “Protocol” header. This column will show HTTP/2.0 in Firefox or h2 in Chrome if HTTP/2 is supported, or HTTP/1.1 if it’s not.

 

The Network tab after loading 8bitbuddhism.com. The website fully supports HTTP/2, as shown in the Protocol column.

 

Alternatively, KeyCDN provides a web-based HTTP/2 test tool. Enter the URL of the website you want to test, and the tool will report back on whether it supports HTTP/2.
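If you prefer the command line, you can also check with curl, assuming a build with HTTP/2 support (roughly version 7.50 or newer for the http_version variable); this is a quick sketch, not a tool from the original article:

$ curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://www.pingdom.com
# prints 2 when the connection was negotiated over HTTP/2, 1.1 otherwise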

 

How Do I Enable HTTP/2 on Nginx?

As of version 1.9.5, Nginx fully supports HTTP/2 via the ngx_http_v2 module. This module comes included in the pre-built packages for Linux and Windows. When building Nginx from source, you will need to enable this module by adding --with-http_v2_module as a configuration parameter.

You can enable HTTP/2 for individual server blocks. To do so, add http2 to the listen directive. For example, a simple Nginx configuration would look like this:

# nginx.conf
server {
    listen 443 ssl http2;
    server_name mywebsite.com;

    # "listen ... ssl" also requires certificate directives; paths here are illustrative
    # ssl_certificate     /etc/nginx/ssl/mywebsite.com.crt;
    # ssl_certificate_key /etc/nginx/ssl/mywebsite.com.key;

    root /var/www/html/mywebsite;
}
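Before reloading, you can have Nginx validate the new configuration without applying it:

$ sudo nginx -t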

Although HTTP/2 was originally intended to require TLS, the final specification allows cleartext HTTP/2 (h2c); keep in mind, however, that major browsers only support HTTP/2 over encrypted connections. To apply the changes, reload the Nginx service using:

$ sudo service nginx reload

or by invoking the Nginx CLI using:

$ sudo /usr/sbin/nginx -s reload

Benchmarking HTTP/2

To measure the speed difference between HTTP/2 and HTTP/1.1, we ran a performance test on a WordPress site with and without HTTP/2 enabled. The site was hosted on a Google Compute Engine instance with 1 virtual CPU and 1.7 GB of memory. We installed WordPress 4.9.6 in Ubuntu 16.04.4 using PHP 7.0.30, MySQL 5.7.22, and Nginx 1.10.3.

 

To perform the test, we created a recurring page speed check in SolarWinds® Pingdom® to contact the site every 30 minutes. After four measurements, we restarted the Nginx server with HTTP/2 enabled and repeated the process. We then dropped the first measurement for each test (to allow Nginx to warm up), averaged the results, and took a screenshot of the final test’s Timeline.

 

 

The metrics we measured were:
  • Page size: the total combined size of all downloaded resources.
  • Load time: the time until the page finished loading completely.

 

Results Using HTTP/1.1

 

Timeline using HTTP/1.1

 

Results Using HTTP/2

 

Timeline using HTTP/2

 

And the Winner Is…

With just a simple change to the server configuration, the website performs noticeably better over HTTP/2 than HTTP/1.1. The page load time dropped by over 13% thanks to fewer TCP connections, resulting in a lower time to first byte. As a result of only using two TCP connections instead of four, we also reduced the time spent performing TLS handshakes. There was also a minor drop in overall file size due to HTTP/2’s more efficient binary data format.

 

Conclusion

HTTP/2 is already proving to be a worthy successor to HTTP/1.1. A large number of projects have implemented it and, with the exception of Opera Mini and UC for Android, mainstream browsers already support it. Whether it can handle the next 18 years of web evolution has yet to be seen, but for now, it’s given the web a much-needed performance boost.

 

You can try this same test on your own website using the Pingdom page speed check. Running the page speed check will show you the size and load time of every element. With this data you can tune and optimize your website, and track changes over time.

Jenkins X (JX) is an exciting new Continuous Integration and Continuous Deployment (CI/CD) tool for Kubernetes users. It hides the complexities of operating Kubernetes by giving developers a simpler experience to build and deploy their code. You can think of it as creating a serverless-like environment in Kubernetes. As a developer, you don’t need to worry about all the details of setting up environments, creating a CI/CD pipeline, or connecting GitHub to your CI pipeline. All of this and much more is handled by JX. In this article, we’ll introduce you to JX, show you how to use it, and how to monitor your builds and production deployments.

 

What is Jenkins X?

JX was created by James Strachan (creator of Groovy, Apache Camel, and now JX) and was first announced in March 2018. It’s designed from the ground up to be a cloud-native, Kubernetes-only application that not only supports CI/CD, but also makes working with Kubernetes as simple as possible. With one command you can create a Kubernetes cluster, install all the tools you’ll need to manage your application, create build and deployment pipelines, and deploy your application to various environments.

Jenkins is described as an “extensible automation server” that can be configured, via plugins, to be a Continuous Integration server, a Continuous Deployment hub, or a tool to automate just about any software task. JX provides a specific configuration of Jenkins, meaning you don’t need to know which plugins are required to stand up a CI/CD pipeline. It also deploys numerous applications to Kubernetes to support building your Docker container, storing the container in a Docker registry, and deploying it to Kubernetes.

Jenkins pipeline builds are driven by adding a Jenkinsfile to your project. JX automates this for you. JX can create new projects (and the required Jenkinsfile) for you or import your existing project and create a Jenkinsfile if you don’t already have one. In short, you don’t need to know anything about Jenkins or Kubernetes to get started with JX. JX will do it all for you.

 

Overview of How JX Works

JX is designed to take away the guesswork and trial-and-error approach many teams have used to create a fully functional CI/CD pipeline in Kubernetes. In many ways, JX is like a Linux distribution, but for Kubernetes: to create a tailored, smooth, and seamless developer experience, it had to decide which of the plethora of available Kubernetes tools to use.

To make the transition to Kubernetes simpler, the command line tool jx can drive most of your interactions with Kubernetes. This means you don’t need to know how to use kubectl right away; instead you can slowly adopt kubectl as you become more comfortable in Kubernetes. If you are an experienced Kubernetes user, you’ll use jx for interacting with JX (CI/CD, build logs, and so on) and continue to use kubectl for other tasks.

When you create or import a project using the jx command line tool, JX will detect your project type and create the appropriate Jenkinsfile for you (if it doesn’t already exist), define the required Kubernetes resources for your project (like Helm charts), add your project to GitHub and create the necessary webhooks for your application, build your application in Jenkins, and if all tests pass, deploy your application to a staging environment. You now have a fully integrated Kubernetes application with a CI/CD pipeline ready to go.

Your interaction with JX is driven by a few jx commands to set up an environment, create or import an application, and monitor the state of your build pipelines. The developer workflow is covered in the next section. Generally speaking, once set up, you don’t need to interact with JX much; it works quietly in the background, providing you CI and CD functionality.
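A few everyday commands give you a feel for the tool; this list is a sketch, and jx get activity appears again in the monitoring section later in this post:

$ jx get environments    # list the environments JX manages (e.g., staging, production)
$ jx get applications    # show which app versions are deployed to each environment
$ jx get activity        # watch pipeline activity for your builds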

 

Install Jenkins X

To get started using JX, install the jx binary. For Mac OS, you can use brew:

brew tap jenkins-x/jx
brew install jx

Note: When I first tried to create a cluster using JX, it installed kops for me. However, the first time jx tried to use kops, it failed because kops wasn’t on my path. To address this, install kops as well:

brew install kops

Create a Kubernetes Cluster

JX supports most major cloud environments: Google GKE, Azure AKS, Amazon EKS, minikube, and many others. JX has a great video on installing JX on GKE. Here, I’m going to show you how to install JX in Amazon without EKS. Creating a Kubernetes cluster from scratch is very easy:

jx create cluster aws

Since I wasn’t using JX for a production application, I ran into a few gotchas during my install:

  1. When prompted with, “No existing ingress controller found in the kube-system namespace, shall we install one?” say yes.
  2. Assuming you are only trying out JX, when prompted with, “Would you like to register a wildcard DNS ALIAS to point at this ELB address?” say no.
  3. When prompted with, “Would you like wait and resolve this address to an IP address and use it for the domain?” say yes.
  4. When prompted with, “If you don’t have a wildcard DNS setup then set up a new CNAME and point it at: XX.XX.XX.XX.nip.io. Then, use the DNS domain in the next input” accept the default.

The image below shows you the EC2 instances that JX created for your Kubernetes Cluster (master is an m3.medium instance and the nodes are t2.medium instances):

AWS EC2 Instances. © 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

When you are ready to remove the cluster you just created, you can use this command (JX currently does not provide a delete cluster command):

kops delete cluster

Here’s the full kops command to remove the cluster you just created (you’ll want to use the cluster name and S3 bucket for all kops commands):

kops delete cluster --name aws1.cluster.k8s.local \
  --state=s3://kops-state-xxxxxx-ff41cdfa-ede6-11e8-xx6-acde480xxxx

To add Loggly integration to your Kubernetes cluster, you can follow the steps outlined here.

 

Create an Application

Now that JX is up and running, you are ready to create an application. The quickest way to do this is with a JX quickstart. In addition to the quickstart applications that come with JX, you can also create your own.

To get started, run jx create quickstart and pick the spring-boot-http-gradle quickstart (see the screenshot below for more details):

jx create quickstart

 

Creating an application using jx create quickstart © 2018 Jenkins Project

Note: During the install process, I did run into one issue. When prompted with, “Which organization do you want to use?” make sure you choose a GitHub Org and not your personal account. The first time I ran this, I tried my personal account (which has an org associated with it) and jx create quickstart failed. When I reran it, I chose my org ripcitysoftware and everything worked as expected.

Once your application has been created, it will automatically be deployed to the staging environment for you. One thing I really like about JX is how explicit everything is. There isn’t any confusion between temporary and permanent environments because the environment name is embedded into the application URL (http://spring-boot-http-gradle.jx-staging.xx.xx.xx.xx.nip.io/).

The Spring Boot quickstart application provides you with one REST endpoint:

Example Spring Boot HTTP © 2018 Google, Inc

 

Developer Workflow

JX has been designed to support a trunk-based development model promoted by DevOps leaders like Jez Humble and Gene Kim. JX is heavily influenced by the book Accelerate (you can find more here), and as such it provides an opinionated developer workflow approach. Trunk-based development means releases are built off of trunk (master in git). Research has shown that teams using trunk-based development are more productive than those using long-lived feature branches. Instead of long-lived feature branches, teams create branches that live only a few hours, making a few small changes.

Here’s a short overview of trunk-based development as supported by JX. To implement a code change or fix a bug, you create a branch in your project, write tests, and make code changes as needed. (These changes should only take a couple of hours to implement, which means your code change is small.) Push your branch to GitHub and open a Pull Request. Now JX will take over. The webhook installed by JX when it imported your project will trigger a CI build in Jenkins. If the CI build succeeds, Jenkins will notify GitHub the build was successful, and you can now merge your PR into master. Once the PR is merged, Jenkins will create a released version of your application (released from the trunk branch) and deploy it (CD) to your staging environment. When you are ready to promote your application from stage to production, you’ll use the jx promote command.

The development workflow is expected to be as follows (a command sketch appears after the list):

  1. In git, create a branch to work in. After you’ve made your code changes, commit them and then push your branch to your remote git repository.
  2. Open a Pull Request in your remote git repo. This will trigger a build in Jenkins. If the build is successful, JX will create a preview environment for your PR so you can review and test your changes. To trigger the promotion of your code from Development to Staging, merge your PR.
  3. By default, JX will automatically promote your code to Stage. To promote your code to Production, you’ll need to run this command manually: jx promote app-name --version x.y.z --env production
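Here’s what that loop looks like in commands; the branch name, app name, and version are hypothetical:

# 1. Branch, commit, and push
$ git checkout -b fix-greeting
$ git commit -am "Fix greeting text"
$ git push -u origin fix-greeting
# 2. Open a PR on GitHub. JX builds it and, on success, deploys a preview
#    environment; merging the PR releases and auto-promotes to staging.
# 3. Promote the release to production manually:
$ jx promote spring-boot-http-gradle --version 0.0.2 --env production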

Monitoring Jenkins X

Monitoring the status of your builds gives you insight into how development is progressing. It will also help you keep track of how often you are deploying apps to various environments.

JX provides you multiple ways to track the status of a build. JX configures Jenkins to trigger a build when a PR is opened or updated. The first place to look for the status of your build is in GitHub itself. Here is a build in GitHub that resulted in a failure. You can clearly see the CI step has failed:

GitHub PR Review Web Page. © 2018 GitHub Inc. All rights reserved.

The next way to check on the status of your build is in Jenkins itself. You can navigate to Jenkins in your browser or, from GitHub, you can click the “Details” link to the right of “This commit cannot be built.” Here is the Jenkins UI. You will notice Jenkins isn’t very subtle when a build fails:

Jenkins Blue Ocean failed build web page. © 2018 Jenkins Project

A third way to track the status of your build is from the command line, using the jx get activity command:

iTerm – output from jx get activity command © 2018 Jenkins Project

If you want to see the low-level details of what Jenkins is logging, you’ll need to look at the container Jenkins is running in. Jenkins is running in Kubernetes like any other application. It’s deployed as a pod and can be found using the kubectl command:

$ kubectl get pods
NAME                      READY     STATUS    RESTARTS   AGE
jenkins-fc467c5f9-dlg2p   1/1       Running   0          2d

Now that you have the name of the Pod, you can access the log directly using this command:

$ kubectl logs -f jenkins-fc467c5f9-dlg2p

 

iTerm – output from kubectl logs command © 2018 Jenkins Project

Finally, if you’d like to get the build output log, the log that’s shown in the Jenkins UI, you can use the command below. This is the raw build log that Jenkins creates when it’s building your application. When you have a failed build, you can use this output to determine why the build failed. You’ll find your test failures here along with other errors like failures in pushing your artifacts to a registry. The output below is not logged to the container (and therefore not accessible by Loggly):

$ jx get build log ripcitysoftware/spring-boot-http-gradle/master
view the log at: http://jenkins.jx.xx.xx.xxx.xxx.nip.io/job/ripcitysoftware/job/spring-boot-http-gradle/job/master/2/console
tailing the log of ripcitysoftware/spring-boot-http-gradle/master #2
Push event to branch master
Connecting to https://api.github.com using macInfinity/****** (API Token for accessing https://github.com Git service inside pipelines)

Monitoring in Loggly

One of the principles of a microservice architecture, as described by Sam Newman in Building Microservices, is being Highly Observable. Specifically, Sam suggests that you aggregate all your logs. A great tool for this is SolarWinds® Loggly. Loggly is designed to aggregate all of your logs into one central location. By centralizing your logs, you get a holistic view of your systems. Deployments can trigger a change in the application that can generate errors or lead to instability. When you’re troubleshooting a production issue, one of the first things you want to know is whether something changed. Being able to track the deployments in your logs will let you backtrack deployments that may have caused bugs.

To monitor deployments, we need to know what’s logged when a deployment succeeds or fails. This is the message Jenkins logs when a build has completed:

INFO: ripcitysoftware/spring-boot-http-gradle/master #6 completed: SUCCESS

From this message, we get a few pieces of information: the project name (ripcitysoftware/spring-boot-http-gradle), the branch (master), the build number (#6), and finally the build status (SUCCESS).

The metrics you should monitor are:

  • Build status – Whether a build was a success or failure
  • The project name – Which project is being built
  • The build number – Tracks PRs and releases

By tracking the build status, you can see how often builds are succeeding or failing. The project name and build number tell you how many PRs have been opened (look for “PR” in the project name) and how often a release is created (look for “master” in the name).

To track all of the above fields, create one Derived Field in Loggly called jxRelease. Each capture group (the text inside of the parentheses) defines a unique Derived Field in Loggly. Here is the regex you’ll need:

^INFO:(.*)\/.*(master|PR.*) #(.*\d) completed: ([A-Z]+)$
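You can sanity-check a pattern like this locally before saving it in Loggly; this sed version is slightly simplified (explicit digits instead of .*\d) but matches the same log line:

$ echo 'INFO: ripcitysoftware/spring-boot-http-gradle/master #6 completed: SUCCESS' \
    | sed -E 's,^INFO:(.*)/(master|PR.*) #([0-9]+) completed: ([A-Z]+)$,branch=\2 build=\3 status=\4,'
# prints: branch=master build=6 status=SUCCESS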

Here’s the Jenkins build success log-message above as it appears in Loggly after we’ve created the Derived Field. You can see all the fields we are defining highlighted in yellow below the Rule editor:

Loggly – Derived Field editor web page. © 2018 SolarWinds Worldwide, LLC. All rights reserved.

Please note that Derived Fields use past logs only in the designer tool. Loggly only adds new derived fields to new log messages. This means if you’ve got an hour of Jenkins output already sent to Loggly and you create the jxBuildXXX fields (as shown above), only new log messages will include this field.

In the image below, you can see all the Derived Fields that have been parsed in the last 30 minutes. For jxBuildBranchName, there has been one build to stage, and it was successful, as indicated by the value SUCCESS. We also see that nine (9) builds have been pushed to stage, as indicated by the jxBuildNumber field.

 

Loggly Search Results web page. © 2018 SolarWinds Worldwide, LLC. All rights reserved.

Now that these fields are parsed out of the logs, we can filter on them using the Field Explorer. Above, you can see that we have filtered on the master branch. This shows us each time the master branch has changed. When we are troubleshooting a production bug, we can now see the exact time the code changed. If the bug started after a deployment, then the root cause could be the code change. This helps us narrow down the root cause of the problem faster.

We can also track when master branch builds fail and fire an alert to notify our team on Slack or email. Theoretically, this should never happen, assuming we are properly testing the code. However, there could have been an integration problem that we missed, or a failure in the infrastructure. Setting an alert will notify us of these problems so we can fix them quickly.

 

Conclusion

JX is an exciting addition to Jenkins and Kubernetes alike. It fills a gap that has existed since the rise of Kubernetes: how to assemble the correct tools within Kubernetes to get a smooth and automated CI/CD experience. In addition, JX helps lower the barrier to entry into Kubernetes and Jenkins for CI/CD. JX itself gives you multiple tools and commands to navigate system logs and track build pipelines. Adding Loggly integration to your JX environment is very straightforward. You can easily track the status of your builds and monitor your app’s progression from development to a preview environment, to a staging environment, and finally to production. When you’re troubleshooting a critical production issue, you can look at the deployment time to see whether a code change caused the issue.
