Using SolarWinds Observability for Performance Tuning
Managed cloud platforms have simplified the provisioning and management of computing resources and services. What was once a cumbersome process of submitting purchase orders and racking physical servers can now be completed in just a few clicks.
The convenience of cloud platforms has brought considerable improvements to development velocity. However, it has also resulted in unpredictable or inflated cloud infrastructure costs, especially when organizations lack clear insight into their resource usage.
Observability solutions can deliver deep insights into an organization’s infrastructure costs, raising alerts or alarms when they detect abnormal resource usage. In this post, we’ll explore the ins and outs of cloud pricing, considering the typical factors contributing to high cloud infrastructure costs. Finally, we’ll look at how to use SolarWinds® Observability to bring your cloud costs under control.
In its simplest form, cloud pricing is based on a pay-as-you-go model. Instances and services are priced according to their duration of use and do not require complex licensing or contracts. Most cloud providers support tiered pricing, with the cost per unit decreasing for higher tiers. Hyperscalers (large cloud service providers) like AWS, GCP, and Azure also provide savings for those willing to pay upfront and make long-term commitments.
In the sections below, we’ll cover the pricing of commonly used cloud services.
Virtual Machine (VM) instances can be used for any computing task. They are priced according to their computing ability, the OS, and the number of available virtual CPUs. For example, in the case of AWS, the least expensive VM option runs Ubuntu and comes with a single virtual CPU and 2 GB of memory. This instance is priced at $0.0255 per hour.
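Hourly rates like this look small but add up quickly when an instance runs around the clock. As a rough sketch, using the $0.0255/hour figure quoted above (your provider's actual rate will vary):

```python
# Rough monthly cost estimate for an always-on on-demand VM instance.
# The $0.0255/hour rate is the example figure from the text;
# substitute your provider's actual pricing.
HOURLY_RATE_USD = 0.0255
HOURS_PER_MONTH = 24 * 30  # ~720 hours in a 30-day month

monthly_cost = HOURLY_RATE_USD * HOURS_PER_MONTH
print(f"Estimated monthly cost: ${monthly_cost:.2f}")  # ~ $18.36
```

Multiply this across dozens of instances, and the motivation for careful instance selection becomes clear.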
Most hyperscalers provide three pricing models for VM instances:
- On-demand: A pure pay-as-you-go model; the most expensive of the three.
- Savings plan: Discounted pricing through an upfront commitment and payment.
- Spot instances: Allows customers to take advantage of unused VM instances if/when available; can lead to discounts of almost 90% compared to on-demand pricing.
For VM instances, cloud providers charge separately for data transfer and storage.
Typically, data transfer from the internet to instances does not incur costs. However, data transfer out of instances is charged depending on the volume, destination region, and services used. For example, GCP charges $0.01 per GB for outbound traffic to different zones within the same Google Cloud region.
Data storage costs vary based on a storage system’s read-and-write performance. For example, AWS EBS is available in various forms (such as General-Purpose Storage Optimized, I/O Optimized, and Throughput Optimized). The cheapest pricing plans start at $0.08 per GB per month.
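These per-unit rates make back-of-the-envelope estimates straightforward. The sketch below combines the two example figures quoted above ($0.08 per GB-month storage, $0.01 per GB cross-zone egress); real pricing varies by provider, region, and tier:

```python
# Back-of-the-envelope estimate combining storage and data-transfer
# costs. The rates below are the example figures from the text, not
# authoritative price quotes.
STORAGE_USD_PER_GB_MONTH = 0.08  # e.g., cheapest AWS EBS tier
EGRESS_USD_PER_GB = 0.01         # e.g., GCP cross-zone, same region

def monthly_data_cost(storage_gb: float, egress_gb: float) -> float:
    """Return an estimated monthly cost in USD for storage plus egress."""
    return storage_gb * STORAGE_USD_PER_GB_MONTH + egress_gb * EGRESS_USD_PER_GB

# 500 GB stored, 2 TB transferred out per month:
print(f"${monthly_data_cost(500, 2000):.2f}")  # $60.00
```

Note how egress volume can rival or exceed the storage bill itself, which is why data-transfer patterns deserve the same scrutiny as instance sizing.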
Hyperscalers price the Kubernetes service at a fixed cost per cluster. For example, Google Kubernetes Engine costs $0.10 per cluster per hour. This pricing is only for using the cluster service, while other components like VM instances, network, and storage are priced separately (as described above).
Hyperscalers provide various managed services, such as relational databases, serverless functions, and messaging queues. These services are priced on a pay-as-you-go model with different performance tiers. Most cloud providers offer on-demand pricing as well as provisioned pricing models.
With its various service types, tiers, and pricing models, cloud platform pricing is undeniably complex. Any enterprise seeking to balance cost and performance in its cloud infrastructure must carefully study pricing documentation and optimization strategies.
With the wide range of cloud compute instances and services available, it’s conceivable an enterprise with a sprawling set of cloud infrastructure needs may slowly see its costs spiral out of control. Let’s consider some of the main reasons behind this phenomenon.
Instance types play a key part in cloud infrastructure costs, and selecting the optimal instance type helps reduce them. Most hyperscalers include five types of instances:
- General purpose
- Compute optimized
- Memory optimized
- Storage optimized
- Accelerated computing
Each type of instance is designed for specific tasks and is priced differently. Choosing the wrong instance type for your task can lead to high costs and performance issues. For example, you may have a compute-heavy use case and incorrectly choose a storage-optimized instance type, such as the Im4gn series. This may lead to higher costs and suboptimal performance.
In addition to offering different instance types, hyperscalers also offer different instance sizes. While this is designed to provide organizations with options for optimizing their costs, the adverse result might be overprovisioning or underprovisioning resources.
For example, you might run an application that requires 4 GB of memory and 2 vCPUs, but you provision (and pay for) an instance with 16 GB of memory and 8 vCPUs. Therefore, you would be overpaying for unused resources.
On the other hand, you could provision an instance with 2 GB of memory and 1 vCPU to run the same application. In this case, the underprovisioning may lead to poor performance, poor user experience, and lost business revenue.
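The cost of overprovisioning compounds over time. The sketch below puts illustrative numbers on the scenario above; the hourly rates are hypothetical placeholders, not real price quotes:

```python
# Illustrative comparison of a right-sized vs. an overprovisioned
# instance. Hourly rates are hypothetical placeholders, not quotes
# from any provider's price list.
RIGHT_SIZED_USD_PER_HOUR = 0.05      # e.g., 2 vCPUs, 4 GB (fits the workload)
OVERPROVISIONED_USD_PER_HOUR = 0.20  # e.g., 8 vCPUs, 16 GB (4x larger)

HOURS_PER_YEAR = 24 * 365

annual_waste = (OVERPROVISIONED_USD_PER_HOUR - RIGHT_SIZED_USD_PER_HOUR) * HOURS_PER_YEAR
print(f"Annual overspend per instance: ${annual_waste:,.2f}")  # $1,314.00
```

Even at these modest rates, a single oversized instance wastes over a thousand dollars a year; a fleet of them multiplies the damage accordingly.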
In large organizations, it’s common to see instances and services running with nobody accountable for them. This typically happens when a project is closed or a team is disbanded. Organizations without proper project closure processes often accumulate numerous such zombie resources.
Monitoring an application’s load pattern and optimizing resource usage to support that load go a long way in reducing costs. When configuring resource scaling, an organization must consider the peak load and the average load. For example, setting up the infrastructure to handle peak load levels in a varying load situation will result in instances sitting idle for a long time.
Using auto-scaling configurations can combat this problem. However, even when using auto-scaling configurations, developers must use optimal parameters to ensure that the number of instances does not scale up or down too quickly.
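As a sketch of what such parameters look like, the snippet below shows a target-tracking configuration of the kind AWS Auto Scaling accepts (for example, via boto3's `put_scaling_policy`). The policy name and target value are illustrative assumptions; tune them to your measured load patterns:

```python
# Sketch of a target-tracking scaling policy of the kind AWS Auto
# Scaling accepts (e.g., via boto3's put_scaling_policy). The policy
# name and target value are illustrative, not recommendations.
scaling_policy = {
    "PolicyName": "keep-cpu-near-target",  # hypothetical name
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        # Aim for ~60% average CPU: low enough to absorb spikes,
        # high enough to avoid paying for idle capacity.
        "TargetValue": 60.0,
    },
}
```

A target tracked too aggressively (say, 90% CPU) risks scaling up too late under a spike, while a very conservative target keeps excess instances running and drives up cost.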
In the previous section, we covered the three different pricing models typically offered by cloud providers. Although the on-demand model is the most convenient, it is also the most expensive option. If your load patterns are clearly known in advance, then you can use spot instances and savings plans with an upfront payment to reduce infrastructure costs.
Using the correct instance types, sizes, pricing models, and scaling configurations can help optimize your cloud infrastructure costs. However, the challenge in implementing cost optimization strategies is the lack of information regarding the real-time status of your infrastructure. An observability solution solves this problem by providing real-time dashboards to visualize infrastructure usage and receive alerts.
SolarWinds Observability includes many such dashboards out of the box. Let’s look at examples of typical metrics that expose issues with infrastructure. In our examples, we’ll look at SolarWinds Observability used in conjunction with AWS.
SolarWinds Observability provides details about instance types and sizes. These details can help organizations to define the optimal instance counts and sizes.
SolarWinds Observability displays the details of an account's on-demand and reserved instances. Moving from on-demand to reserved instances can help an organization reduce its costs.
SolarWinds Observability can help organizations understand the load on their API gateways and load balancers. It can provide details regarding CPU utilization, disk performance, network performance, and status check responses for individual instances.
For example, a high-capacity instance running at consistently low load should be a red flag. Load pattern details can help engineers determine the appropriate instance size and count for the best performance of their applications without overprovisioning resources.
SolarWinds Observability also exposes metrics related to autoscaling, including:
- Desired group capacity
- Maximum group capacity
- Pending instances
- Terminated instances
A bad scaling configuration can result in cost explosions due to unforeseen peak load patterns.
Integrating AWS with SolarWinds Observability is as easy as clicking the Add Data button at the top of your account dashboard and selecting AWS. Then, choose whether to use CloudFormation or manual setup. Enter a display name for the metrics to be ingested. Complete the setup process by selecting the AWS regions and services to monitor. On the AWS side, you only need to create an IAM policy and role to allow SolarWinds Observability access to your infrastructure metrics.
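For the IAM piece, a read-only policy of the sort below is typically what a monitoring integration needs to pull CloudWatch metrics. The exact actions SolarWinds Observability requires are listed in its setup wizard and documentation; the actions shown here are common, illustrative examples only:

```python
import json

# Sketch of a read-only IAM policy document for granting a monitoring
# integration access to CloudWatch metrics. The exact actions required
# by SolarWinds Observability are listed in its setup docs; these are
# illustrative examples.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:GetMetricData",
                "cloudwatch:ListMetrics",
                "tag:GetResources",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy_document, indent=2))
```

Attaching this policy to a dedicated role (rather than a user with broad permissions) keeps the integration scoped to read-only metric access.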
Managed cloud platforms and their pay-as-you-go pricing models offer great convenience when setting up infrastructure. However, infrastructure costs can quickly spiral out of control. Organizations must carefully define a setup that strikes the right balance between performance and costs to prevent this. To arrive at the most cost-effective infrastructure setup, they must use the correct instance types, instance sizes, scaling configurations, and pricing models.
Understanding which options are best for your situation depends on your visibility into the load patterns of your infrastructure. This is where observability can help. SolarWinds Observability enables you to optimize AWS costs by improving visibility into your cloud infrastructure. To find out more, sign up for a free trial of SolarWinds Observability or experience our interactive demo environment.
Pricing in USD as of June 23, 2023.