The public cloud now supports a vast variety of workloads because it offers a low-effort way to provision compute resources at scale. But as companies have learned over the past decade, cloud costs can easily spiral out of control without proper governance and continuous monitoring.
Gaining control over cloud costs remains an issue, in large part because gaining visibility into the cloud’s internal operations is challenging at best. This leaves organizations making rough estimates of how much they are actually consuming until the bill arrives. According to a recent survey of IT, finance and operations professionals, nearly half of respondents saw cloud costs jump at least 25% in 2020, with 20% reporting increases of 50% or more. At the same time, fewer than 20% said they could immediately detect spikes in cloud costs, while 25% said it can take days or even weeks to do so.
Drilling into Data
One particularly thorny issue is the difficulty of breaking down cloud usage by service or individual component, which prevents users from determining exactly why costs are rising or from developing an effective strategy to reverse the trend. A typical cloud bill itemizes thousands of services each month. Even when viewing overlapping time series data, which in theory should break down costs for each service, it is next to impossible to tell which series are trending up or down once more than about 50 services are displayed at once.
This problem is at the heart of what companies need to know in order to manage their cloud budget, and is only magnified when organizations start spreading workloads over multiple clouds.
While there are many commercial, off-the-shelf solutions to track cloud costs, a lot of organizations find it helpful to craft their own customized approach. One of the most effective tools to accomplish this is the Mann-Kendall test, which has proven highly effective at separating the signal from the noise in environments generating multiple time series data feeds.
The Mann-Kendall (MK) test is an amalgam of the work of mathematician H.B. Mann and statistician Maurice Kendall. It provides a non-parametric means of identifying trends in a series that, when applied to cloud management data, can produce a trend analysis capable of accurately measuring costs and other metrics over time. This gives cloud consumers deep insight into their usage patterns and the factors that affect them, which can then be analyzed more deeply to determine the most effective way to streamline costs without substantially hampering operations, and in some cases while even improving them.
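To make the idea concrete, here is a minimal sketch of the MK test in Python. The statistic S sums the signs of all pairwise differences in the series; for simplicity this sketch uses the no-ties variance formula and a synthetic increasing series, so treat it as an illustration rather than a production implementation:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Return the MK S statistic and a two-sided p-value for trend."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S sums the signs of all pairwise differences x[j] - x[i], j > i
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    # Variance of S under the null hypothesis of no trend (no-ties formula)
    var_s = n * (n - 1) * (2 * n + 5) / 18
    # Continuity-corrected Z statistic, compared to a standard normal
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, p

# A steadily rising series yields the maximum S and a tiny p-value
s, p = mann_kendall(range(20))  # S = 190.0, p far below 0.05
```

A small, statistically significant p-value indicates a monotonic trend in the cost series; libraries such as pymannkendall also handle ties and seasonal variants.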
The Right Approach
Implementing MK testing in a cloud management platform must be done correctly. The test answers the same question as a parametric linear regression analysis, namely whether the slope of a fitted regression line differs from zero, but the parametric approach is valid only when the residuals from the fitted regression are normally distributed. The MK test imposes no such distributional requirement.
For the test to provide a meaningful analysis of time series data, several assumptions must hold. First, in the absence of an identified trend, the measurements over time must be independent and identically distributed. In addition, the measurements must represent the true states of the observables at the times they are taken, and the methods used to collect the samples must be unbiased.
Note that MK is not designed to account for seasonal (periodic) effects, so it is best to remove them before the series is fed into the test. A seasonal-trend decomposition procedure based on Loess regression (STL) can break the time series into its constituent components. This makes it possible to analyze inputs according to a range of desired parameters, such as the time series itself, its periodicity, the seasonal window width and the model type, which determines whether data variations are proportional to the level of the series or independent of it.
Ultimately, this is used to determine if trends exist within the time series data and to calculate the slope of the regression line to determine how quickly the data is changing. All of this can be done in a highly granular fashion to give cloud administrators a fine-grained view of which workloads are spiking and how they can be reined in.
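The slope itself is commonly estimated with the Theil-Sen estimator, the non-parametric companion to the MK test, which takes the median of all pairwise slopes. A minimal sketch with SciPy, again on invented cost data:

```python
import numpy as np
from scipy.stats import theilslopes

# Hypothetical daily cost trending upward by about $2/day, plus noise
rng = np.random.default_rng(42)
days = np.arange(60)
daily_cost = 500 + 2.0 * days + rng.normal(0, 5, size=60)

# Theil-Sen slope: robust to outliers, no normality assumption needed
slope, intercept, lo, hi = theilslopes(daily_cost, days)
print(f"estimated cost growth: ${slope:.2f}/day (95% CI {lo:.2f} to {hi:.2f})")
```

The recovered slope, expressed here in dollars per day, is the fine-grained number an administrator can attach to each workload to see how fast its cost is changing.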
It should be noted that not all spikes in cloud consumption are a net-negative for a company. If workloads are on the rise because sales are up, that’s a good thing. If the rise is not producing a quantifiable benefit to the business model, however, it’s probably time to take action.
Luckily, by using Mann-Kendall, organizations have a quick, reliable way to get to the truth of the matter.
About the Author
Vadim Solovey is General Manager at DoiT International.