What is Modern Infrastructure?
Understanding modern software applications isn’t just a question of what; it’s also a question of why. Why do we choose to use a particular technology? How does that technology serve the overall business needs? And when you have a problem, how do you figure out what’s wrong? If you’re in the position of trying to understand a modern software application for the first time, these questions can seem unanswerable. Good news: every person you work with, and everyone you look up to, has had the same issues. When you understand the kinds of problems modern infrastructure is designed to solve, its pieces make a lot more sense. In this post, we’ll talk about what makes up a modern software application and tools that make it easy to understand your application’s performance.
What Does Modern Infrastructure Include?
Before we start, an important caveat: this is a high-level view of infrastructure. I might talk about things in this post that don’t make sense for your application. Additionally, there might be parts of your software stack I don’t touch on. Every application is unique! While many modern infrastructure stacks share similar elements, no single piece of the puzzle is used everywhere. Not even something as common as a database is the right fit for every application. This post is a guide, not a universal truth.
Clients
When your users think about your application, they’re thinking about the client. The client is the interface through which your users interact with your application. For some applications, the client is the entirety of the application. Often, we think of clients as graphical user interfaces (GUIs), but that’s not the only form they take. You might have a web client running in a browser, sending data to and from a server. You might have a mobile client running on Android or iOS. You might have a command line client that makes HTTP requests, like cURL. The “why” of a client is simple: without it, nobody can use your product.
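To make that concrete, here’s a minimal sketch of a non-GUI client written in Python. The endpoint, token, and “to-do” data model are hypothetical stand-ins; the point is simply that a client is anything that sends requests to your application server and presents the response.

```python
import requests  # assumes the third-party "requests" library is installed

# Hypothetical endpoint; substitute your own application's API.
API_URL = "https://api.example.com/v1/todos"

def fetch_todos(auth_token: str) -> list[dict]:
    """Ask the application server for the current user's to-do items."""
    response = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {auth_token}"},
        timeout=5,
    )
    response.raise_for_status()  # surface HTTP errors instead of silently ignoring them
    return response.json()

if __name__ == "__main__":
    print(fetch_todos("demo-token"))
```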
The ongoing trend among enterprises is the migration of systems to cloud-hosted platforms. Nearly 90% of respondents to a survey of CIOs stated that their company had some plan or strategy in place to migrate from on-premises deployments to the cloud. In addition to migrations from on-premises to the cloud, let's not forget enterprises also migrate systems from one cloud platform to another.
Modern applications are complex regardless of the migration path and involve multiple layers. Traditional monitoring methods rely on teams manually analyzing logs and metrics. However, the highly distributed and ephemeral nature of modern applications makes that approach almost impossible: engineers would need to collect and correlate logs and metrics across architectural components that can number in the hundreds.
Enterprises embarking on an application migration need more advanced methods to fully understand the success and health of their applications—before and after the migration.
Application Server
This is the beating heart of most software applications. It’s where all the logic lives for whatever the application does. If you’re a software developer, this and the client are the parts of the code you touch most often. The application server accepts requests from your client. It processes those requests, then responds. When we talk about application servers, we’re talking about everything from authentication and authorization to validation and data persistence. You’ll often hear this part of the application called the “back end” in conjunction with the database. The “why” of an application server is also simple. Any time more than one client needs to coordinate with another, you’ll need an application server involved.
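Here’s a toy sketch of that accept-process-respond cycle using Flask. The route, the hard-coded token check, and the in-memory list are all illustrative assumptions; a real back end would use proper authentication and a database for persistence.

```python
from flask import Flask, jsonify, request  # assumes Flask is installed

app = Flask(__name__)

# Hypothetical in-memory store standing in for real data persistence.
TODOS: list[dict] = []

@app.route("/v1/todos", methods=["POST"])
def create_todo():
    # Authentication/authorization would normally happen here.
    if request.headers.get("Authorization") != "Bearer demo-token":
        return jsonify(error="unauthorized"), 401

    payload = request.get_json(silent=True) or {}
    title = payload.get("title", "").strip()
    if not title:  # validation: reject malformed requests from the client
        return jsonify(error="title is required"), 400

    todo = {"id": len(TODOS) + 1, "title": title}
    TODOS.append(todo)  # a real application server would write to a database
    return jsonify(todo), 201

if __name__ == "__main__":
    app.run(port=8000)
```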
Data Persistence
Usually, your users don’t want to do something once and never touch it again. They want to do some work, then rely on you to save their work, so they can come back to it later. Often, but not always, you’ll need a database to do this. Most applications (whether they run on an application server or merely through a local client) use some type of database to organize application data. The “why” for databases is also straightforward: they provide a structured way to both persist and retrieve data. Relational databases are queried with a special-purpose language, Structured Query Language (SQL). Importantly, database interaction often takes a significant portion of application processing time. When we get to application monitoring, we’ll talk about how to understand your database’s performance.
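Here’s a tiny sketch of structured persistence and retrieval with SQL, using Python’s built-in sqlite3 module and an invented todos table. A production application would typically talk to a database server such as PostgreSQL or MySQL over the network, but the idea is the same.

```python
import sqlite3

# In-memory database for illustration; a real app would connect to a
# managed database server reachable over the network.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE todos (id INTEGER PRIMARY KEY, title TEXT NOT NULL, done INTEGER DEFAULT 0)"
)

# Persist some work...
conn.execute("INSERT INTO todos (title) VALUES (?)", ("write the report",))
conn.commit()

# ...and retrieve it later with a structured query.
for row in conn.execute("SELECT id, title, done FROM todos WHERE done = 0"):
    print(row)  # (1, 'write the report', 0)
```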
Data persistence doesn’t just mean databases, though. Sometimes, it also means long-term file storage. Take Imgur, an image-hosting website, for instance. Imgur needs a database to hold information about its images, plus file storage for the images themselves. Your site might need a similar setup. Databases aren’t good at storing and retrieving large binary files, so putting the images themselves into the database is generally a bad idea; instead, the files live in file storage and the database holds references to them. The logic for storing and retrieving data, both from the database and from other types of storage, lives in the application server.
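A minimal sketch of that pattern, assuming a local folder as a stand-in for object storage and an invented images table: the file goes to file storage, and the database keeps only a reference to it.

```python
import sqlite3
from pathlib import Path

# Hypothetical local directory standing in for object storage (S3, GCS, etc.).
UPLOAD_DIR = Path("uploads")
UPLOAD_DIR.mkdir(exist_ok=True)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, title TEXT, path TEXT)")

def save_image(title: str, data: bytes) -> int:
    """Write the bytes to file storage and record only a reference in the database."""
    path = UPLOAD_DIR / f"{title}.png"
    path.write_bytes(data)
    cursor = conn.execute(
        "INSERT INTO images (title, path) VALUES (?, ?)", (title, str(path))
    )
    conn.commit()
    return cursor.lastrowid

image_id = save_image("cat", b"\x89PNG placeholder bytes")  # placeholder data for illustration
print(conn.execute("SELECT * FROM images WHERE id = ?", (image_id,)).fetchone())
```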
Load Balancing
Here’s where things start to get a little more complicated. Your client takes input from a user and makes requests to the application server, which processes each request. Maybe it interfaces with a data persistence layer to save or retrieve some data. Finally, it sends a response back to the client, which displays the result to the user. If you have one user or a hundred users, that’s probably enough for your application. But what if you have a million? Ten million? Suddenly, your application server is straining under the load from all those concurrent users.

This is where a load balancer comes into play. Put simply, a load balancer is a server that sits in front of your application servers and distributes incoming requests among them. Yes, in this case, you need more than one application server. While we’ve come a long way in multi-core and multi-threaded CPU architecture, a single application server can still only handle a few dozen to a few hundred requests concurrently. Modern infrastructure solves this problem by running multiple copies of the application server at once. The load balancer does what its name says: it balances the load between those servers. It knows how many requests each server is handling at any time and generally distributes new requests to the least loaded servers, keeping any one of them from overloading.
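Here’s a toy sketch of that “least loaded” idea in Python. Real load balancers such as NGINX or HAProxy do this (and much more) at the network level; the pool names and the strategy below are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    """One copy of the application server, as the load balancer sees it."""
    name: str
    active_requests: int = 0

# Hypothetical pool of identical application servers behind the balancer.
POOL = [Backend("app-1"), Backend("app-2"), Backend("app-3")]

def route(request_id: int) -> Backend:
    """Least-connections strategy: forward to whichever server is least loaded."""
    backend = min(POOL, key=lambda b: b.active_requests)
    backend.active_requests += 1  # the count drops again once the response is sent
    print(f"request {request_id} -> {backend.name}")
    return backend

# Six concurrent requests spread evenly across the pool.
for i in range(6):
    route(i)
```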
Infrastructure Orchestration
Astute readers will notice our application infrastructure has gained a lot of complexity. We’ve gone from a simple client making a request to a simple server reading from a simple database to multiple servers interfacing with multiple clients in parallel. What’s more, for applications serving many customers, you need to keep those applications running all the time. This is where orchestration and configuration management software like Ansible or Chef comes into play. These tools define your infrastructure as code, then integrate with cloud provider APIs and physical hardware to materialize that infrastructure.
These software stacks define your entire infrastructure and make sure hardware meets all your needs. Instead of manually provisioning three application servers, you can write code to tell an orchestration service you need enough processing power to handle the current load. If the load increases, your orchestration software provisions a new application server and automatically connects it to the load balancer and the database. When the load decreases, the orchestration service deprovisions hardware, so you don’t pay for more CPU power than needed.
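As a sketch of the decision an orchestration service automates, here’s the scale-up/scale-down logic in miniature. The capacity figure, thresholds, and provision/deprovision steps are all hypothetical; real tools call cloud provider APIs rather than printing messages.

```python
# Assumed capacity of one application server; a real value comes from load testing.
REQUESTS_PER_SERVER = 200
MIN_SERVERS = 1

def desired_server_count(current_load: int) -> int:
    """How many application servers do we need for the current request rate?"""
    return max(MIN_SERVERS, -(-current_load // REQUESTS_PER_SERVER))  # ceiling division

def reconcile(current_servers: int, current_load: int) -> int:
    """Compare actual capacity with desired capacity and 'act' on the difference."""
    desired = desired_server_count(current_load)
    if desired > current_servers:
        print(f"provisioning {desired - current_servers} server(s) and registering them with the load balancer")
    elif desired < current_servers:
        print(f"deprovisioning {current_servers - desired} server(s) to avoid paying for idle capacity")
    return desired

servers = reconcile(current_servers=3, current_load=950)       # scales up to 5
servers = reconcile(current_servers=servers, current_load=150) # scales back down to 1
```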
How to Monitor Modern Infrastructure
When you’re scaling a big piece of software, you need to know how all the pieces fit together. Not just conceptually, but moment to moment. Slow responses and application failures lead to frustrated customers, so it’s critical for a team to know about infrastructure issues as soon as possible. Each piece of infrastructure introduced above has its own logging system, but that isn’t enough on its own. A log entry from a load balancer corresponds to an action in the client and a response from the server, and you can’t see the whole picture with just one piece of the puzzle. You need something that gives you a view of your entire application’s infrastructure at once, so you can quickly identify problems. An observability solution like SolarWinds Observability gives you that visibility.
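One common way to stitch those separate logs together is to tag each request with an ID and include it in every log line, so entries from the load balancer, application server, and database can be correlated afterward. Here’s an illustrative sketch; the component names, flow, and timings below are made up.

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("demo")

def handle_request(path: str) -> None:
    # Generated once at the edge and passed along to every component.
    request_id = str(uuid.uuid4())
    log.info("request_id=%s component=load_balancer path=%s", request_id, path)
    log.info("request_id=%s component=app_server action=process", request_id)
    log.info("request_id=%s component=database query_ms=42", request_id)  # illustrative timing
    log.info("request_id=%s component=app_server status=200", request_id)

handle_request("/v1/todos")
```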
We mentioned earlier that database performance can be a thorny issue. Many applications devote most of their resources to their database, which means poor database management can lead to poor app performance. Luckily, SolarWinds Observability integrates easily with many database systems. Extending that visibility to your applications lets you see how each part of your infrastructure affects your overall response times, and gives you insight into how your customers experience your application in real time.
Modern Infrastructure Is Complicated, but You Can Manage It
Modern infrastructure might still seem overwhelming. That’s OK! It’s a big topic. The key is to remember why architects choose each of these tools. Each serves a distinct purpose, and none is chosen without care. As you develop your infrastructure and scale your application, remember you need insight into how your application performs at every level. Implementing full-stack observability will provide the unified visibility you need to keep your application humming and your customers happy.