Web application performance is critical for overall business success. Poor performance can quickly cause your profits and reputation to suffer; if your online store isn’t supported by sufficient IT resources, an excessive number of users could slow or even crash your service and potentially cause you to lose customers and revenue.
If your current web app performance tuning strategy is focused on code-level concerns alone, you could be missing important context. Infrastructure issues can also impact end-user experience—especially with today’s dynamic IT environments, which may include components like virtualized servers, container-based microservices, or serverless infrastructure. It’s critical to understand how your strategy for managing servers, virtual hosts, and containers can support—or kill—your web application performance.
What Are Typical Web Server Performance Issues?
Your web server and other endpoints play a central role in web application performance. Essentially, a web application needs to load and work quickly for its users, which means the server must respond quickly. The web app requests resources from the server using HTTP, and the server responds with the resource (or an error message). If all goes well, this call and response is so quick it doesn’t noticeably impact performance.
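This request/response cycle is easy to measure directly. The sketch below stands up a throwaway local server with Python’s standard library and times a single HTTP round trip—essentially the measurement a user’s browser makes on every page load. The handler and URL here are illustrative stand-ins for your real server:

```python
import http.server
import threading
import time
import urllib.request

# Minimal sketch: stand up a local web server, then time the
# HTTP request/response round trip the way a client would.

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

start = time.perf_counter()
with urllib.request.urlopen(url) as resp:
    status = resp.status
    payload = resp.read()
elapsed_ms = (time.perf_counter() - start) * 1000

print(status, len(payload), f"{elapsed_ms:.1f} ms")
server.shutdown()
```

On a healthy, unloaded server this round trip completes in milliseconds; the performance problems discussed below are what happens when it doesn’t.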
If you don’t plan ahead for capacity, you may find too many simultaneous requests crash your server. Overloaded servers can drag down response times for end users. Poorly tuned database queries may take too long to retrieve data. A lack of compute or memory resources will quickly kill performance, and even insufficient network bandwidth can limit the number of users your application can support.
That’s why you monitor server and infrastructure metrics. If key metrics, like error rates, approach an unacceptable threshold, you can respond quickly and prevent major performance issues. And that’s why it’s worthwhile to be familiar with common issues that can occur with more complex server and infrastructure configurations.
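As a concrete illustration of threshold-based monitoring, here’s a minimal sketch in Python. The metric names and threshold values are assumptions you’d replace with your own service-level objectives:

```python
# Minimal sketch: evaluate recent server metrics against alert
# thresholds so issues surface before users feel them. The metric
# names and limits below are illustrative, not recommendations.

THRESHOLDS = {
    "error_rate": 0.05,       # alert above 5% failed requests
    "p95_latency_ms": 800,    # alert above 800 ms 95th-percentile latency
    "cpu_utilization": 0.90,  # alert above 90% CPU
}

def check_metrics(samples):
    """Return the names of metrics that breached their threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if samples.get(name, 0) > limit]

current = {"error_rate": 0.08, "p95_latency_ms": 450, "cpu_utilization": 0.62}
breaches = check_metrics(current)
print(breaches)  # only error_rate exceeds its threshold
```

In practice a monitoring platform evaluates rules like this continuously and routes breaches to alerting, but the core logic is the same comparison.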
Why Virtual Servers Can Limit User Capacity
Virtual machines (VMs) serve as parallel, separate instances of underlying systems. When you virtualize web servers, you can reduce the number (and cost) of underutilized physical servers by housing several servers on the same hardware. This process typically offers you greater control over backups and configuration, but if not handled correctly, you could end up experiencing sudden performance drops for your web applications.
Experts don’t agree on whether virtualization necessarily impacts web application performance—tests have shown both positive and negative impacts. But a drop in performance is much more likely if you’re not careful about the transition. Virtualization changes how load is distributed across servers, which potentially changes the user capacity threshold. That’s more likely to occur if you’re using inappropriate virtualization tools—like a free, evaluation-only hypervisor not meant for production use. You also must ensure the underlying hardware is truly powerful enough for your purposes, especially CPU and memory.
To understand the impact on your end users, you should be tracking processes like user querying, page rendering, and reading and writing (disk I/O). The primary question you want to ask is: how many simultaneous users can the virtualized servers handle? You can put a number on this by measuring average page duration and error rate. If you’re watching these metrics, you should have a good sense of whether your virtualized servers need tuning or adjustments.
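A rough capacity check can be scripted. The sketch below simulates concurrent user requests and reports the two metrics named above: average page duration and error rate. Here `fetch_page()` is a hypothetical stand-in; in practice it would issue real HTTP requests against the virtualized server:

```python
import concurrent.futures
import random
import time

# Minimal sketch: estimate how many simultaneous users a server can
# handle by firing concurrent requests and tracking average page
# duration and error rate. fetch_page() is a simulated stand-in for
# a real HTTP request to the server under test.

def fetch_page():
    time.sleep(random.uniform(0.01, 0.03))  # simulated page render time
    return random.random() > 0.02           # ~2% simulated failures

def one_request(_):
    start = time.perf_counter()
    ok = fetch_page()
    return time.perf_counter() - start, ok

def run_load(concurrent_users, requests_per_user=5):
    total = concurrent_users * requests_per_user
    with concurrent.futures.ThreadPoolExecutor(concurrent_users) as pool:
        results = list(pool.map(one_request, range(total)))
    durations = [d for d, _ in results]
    errors = sum(1 for _, ok in results if not ok)
    return sum(durations) / total, errors / total

avg_duration, error_rate = run_load(concurrent_users=20)
print(f"avg page duration: {avg_duration * 1000:.1f} ms, "
      f"error rate: {error_rate:.1%}")
```

Rerunning this with increasing `concurrent_users` until duration or error rate crosses your acceptable threshold gives you a practical number for the capacity question.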
How Microservices Can Lead to High Latency
Microservices are a way of structuring complex applications that moves away from the “monolith” model toward a collection of smaller, more flexible, independently scalable services. The individual modules communicate with each other through language-agnostic APIs. A traditional monolith is more difficult to update, more vulnerable to crashing outright, and can only scale by replicating the entire application, which strains your servers because every copy requires the full resource footprint. Microservices, by contrast, typically run in containers—isolated operating system environments sharing the host kernel—that allow each service to be developed, deployed, and scaled independently without dedicating separate hardware to it.
Microservices are becoming increasingly popular, especially since they’re resilient to failure; websites built on microservices can better weather sudden spikes in traffic (like retail websites during the holiday sales season). But even microservices can cause performance problems. These are complex, distributed systems, and requests must travel as quickly as possible between modules. Remote calls can experience high latency, especially if they aren’t taking the best-performing routes or if API endpoints have to constantly redirect clients. And because individual modules can be developed and updated in isolation, parts of the application can drift out of step, making it more difficult for them to coordinate. While the risk of total failure is lessened, the user experience may be slowed considerably.
To prevent high latency, organizations must be able to provide sufficient monitoring across the many moving parts of the application, which can be a challenge if you’re using a variety of languages and platforms. Organizations must ensure fast provisioning so that newly added or updated containers have the dedicated server resources they require. It’s also critical to consolidate services within a single data center to help ensure calls stay local.
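One common defense against slow remote calls is giving each call a latency budget with a fallback, so one slow module can’t stall the whole request path. Below is a minimal Python sketch; `call_service()` is a hypothetical stand-in for a real HTTP or gRPC call between modules:

```python
import concurrent.futures
import time

# Minimal sketch: guard an inter-service call with a latency budget
# so a slow microservice degrades gracefully instead of stalling the
# caller. call_service() simulates a remote call with a given delay.

def call_service(delay):
    time.sleep(delay)
    return "ok"

def call_with_budget(fn, *args, budget_s=0.2):
    """Run a remote call, but give up once the latency budget is spent."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(fn, *args)
        try:
            return future.result(timeout=budget_s)
        except concurrent.futures.TimeoutError:
            return "fallback"  # degrade gracefully: cached or partial data
    finally:
        pool.shutdown(wait=False)

fast = call_with_budget(call_service, 0.05)  # completes within budget
slow = call_with_budget(call_service, 0.5)   # budget exceeded
print(fast, slow)  # ok fallback
```

Production systems typically layer retries, circuit breakers, and distributed tracing on top of this basic pattern, but the budget-plus-fallback idea is the core.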
How Serverless Environments Impact Visibility
Serverless environments use a cloud computing model in which a third party is responsible for managing all server allocation and provisioning. Your applications thus depend on cloud services to operate (this model is often delivered as “functions as a service” (FaaS) or “backend as a service” (BaaS)). Using the cloud can help a company save on costs and resources, and adds flexibility for scaling and failover.
On the other hand, outsourcing server functions and management to a third party means you may have more limited visibility. Code and configuration are the aspects most under your control, as the cloud provider is responsible for the infrastructure. Serverless environments can also be harder to replicate locally, which makes testing and debugging after a migration trickier. It’s therefore critical for admins to detect risks within their architecture early in a migration, and the best way to do so is with tools that provide end-to-end instrumentation and code tracing.
You should be prepared for common issues that arise in serverless environments. For instance, you may experience multi-tenancy, where several customers share a single server—potentially slowing down application performance, although you may not be aware this is the root cause. Ideally, going serverless through a provider like AWS Lambda, Google Cloud Functions, or Microsoft Azure Functions means the application code can run from anywhere—including edge locations—potentially speeding load time. But latency can actually grow due to “cold starts,” where the runtime has to spin up before your code can run. Make sure you use monitoring tools that boost your visibility into the third-party service, so you don’t waste time fixing code when the problem lies with the platform.
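Cold starts can be made visible from inside the function itself. The sketch below follows AWS Lambda’s (event, context) handler convention: the module body runs once per container, so flagging the first invocation lets your metrics separate cold-start latency from steady-state latency. The handler body is illustrative:

```python
import time

# Minimal sketch: detect cold starts in a serverless function. The
# module body runs once when the container spins up ("cold start");
# the handler runs on every invocation. The first call on a fresh
# container is flagged so monitoring can separate cold-start latency
# from steady-state latency.

_container_started = time.time()
_is_cold = True

def handler(event, context=None):
    global _is_cold
    cold, _is_cold = _is_cold, False
    start = time.perf_counter()
    result = {"message": "hello"}  # your real work goes here
    duration_ms = (time.perf_counter() - start) * 1000
    # Emit the flag alongside your metrics so cold starts are visible.
    return {"cold_start": cold, "duration_ms": duration_ms, **result}

first = handler({})
second = handler({})
print(first["cold_start"], second["cold_start"])  # True False
```

A spike of `cold_start=True` invocations after a deploy or traffic surge tells you the latency you’re seeing is container spin-up, not your code.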
Slow Database Queries Lead to Slow Performance
If your web apps are experiencing poor performance, the problem might not be your server or services—it may be your database. To run as intended, applications need to pull data from databases efficiently; slow queries take longer to find and return information, quickly dragging web application performance down to unacceptable levels. That’s why query tuning and optimization should be near the top of your list for ensuring sufficient performance. If you’re experiencing high latency or low capacity, investigate the dozens of well-known query tuning methods and best practices for reducing replication lag, bad indexing, contention, high CPU usage, and other common problems.
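As a concrete example of bad indexing, the SQLite sketch below uses EXPLAIN QUERY PLAN to show the same query switching from a full table scan to an index search once an index exists. The table and column names are illustrative:

```python
import sqlite3

# Minimal sketch: the same query before and after adding an index.
# Without the index, SQLite scans every row; with it, SQLite seeks
# directly to the matching entries.

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(50_000)],
)

query = "SELECT COUNT(*) FROM orders WHERE customer_id = 42"

def plan(sql):
    """Return the detail column of the first query-plan row."""
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][-1]

before = plan(query)  # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # index search

print("before:", before)
print("after: ", after)
```

The same principle applies to production databases: inspecting the query plan (EXPLAIN in MySQL and PostgreSQL) is usually the first step in diagnosing a slow query.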
Observability Provides Context
While most operations and development teams have different monitoring tools for the varied components supporting web applications, they lack a holistic view that brings this data together to support data-driven decisions and to enable efficient web application performance optimization.
SolarWinds® Observability goes beyond monitoring. It collects, connects, and provides context around the performance data across the many underlying components supporting web applications. SolarWinds Observability links the performance data from web applications, their services, infrastructure, database, networks, and the end-user experience to deliver holistic visibility, health, and performance status at the application level.
If you’d like to see observability in action, take a few minutes to explore the SolarWinds Observability interactive demo. It’s a hands-on demo environment populated with real-time data from an online application, so you can see how the different monitoring data combine to provide holistic visibility. Give it a try and let me know what you think below.