Datadog, New Relic, and Splunk pioneered observability from the 2010s through the early 2020s.
Their premium status, and the costs that come with it, are justified by huge feature sets and broad product suites. For the last 15 years, everyone from well-funded startups to blue chip companies could justify paying a premium price for observability. After all, observability was, and remains, a business-critical function.
The importance of observability is greater than ever – but so is the need to cut costs. Observability devours budgets, and it will only get more expensive as cloud workloads expand and generate larger data volumes.
Getting locked into gigantic contracts with premium vendors is no longer viable for many organizations. That's why Logz.io designed an entirely new approach to large-scale telemetry data processing, storage and analytics – one that significantly reduces the total cost of ownership for observability.
Logz.io: Full Observability at Half the Cost of Datadog, New Relic and Splunk
Like the Observability Juggernauts, Logz.io unifies log, metric and trace analytics on a single platform to provide full observability into modern cloud infrastructure and applications.
Unlike the Juggernauts, Logz.io is designed from the ground up to lower the cost of observability by shrinking the compute footprint and engineering effort needed to deliver an observability product.
Let’s dive deeper into how Logz.io achieves cost efficiency compared to Datadog, New Relic and Splunk.
Full control over telemetry data volumes and costs
There is a direct and causal link between expanding cloud workloads, growing telemetry data volumes and rising observability costs.
While Datadog, New Relic and Splunk benefit immensely from cloud and data growth, much of the data ingested into these platforms is never actually used.
Not coincidentally, the Juggernauts make it unintuitive to filter out unneeded telemetry data. As a result, costs can skyrocket even if your team only gets real value from a quarter of the data you're billed for.
Logz.io takes the opposite approach: it proactively encourages users to identify and remove data that isn't valuable, which directly reduces costs.
By creating an inventory of all incoming telemetry data, Logz.io makes it easy to isolate the junk from the critical information. In the same UI, users can add filters to remove the useless data – immediately reducing excess data volumes and costs.
We call this feature the Data Optimization Hub, and it's one of a kind.
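To make the concept concrete, here's a minimal sketch of what a drop filter does. The field names and rules below are hypothetical, and in practice this filtering is configured through the Data Optimization Hub UI rather than in your own code:

```python
# Conceptual sketch only: a simple drop filter applied to incoming log records
# before they are indexed. Field names and rules are hypothetical; in Logz.io,
# filters like these are configured in the Data Optimization Hub UI.

DROP_RULES = [
    lambda record: record.get("level") == "DEBUG",          # verbose debug noise
    lambda record: record.get("source") == "healthcheck",   # synthetic probes
]

def keep(record: dict) -> bool:
    """Return True if the record is worth indexing (and paying for)."""
    return not any(rule(record) for rule in DROP_RULES)

incoming = [
    {"level": "ERROR", "source": "checkout", "message": "payment failed"},
    {"level": "DEBUG", "source": "checkout", "message": "cache hit"},
    {"level": "INFO", "source": "healthcheck", "message": "ok"},
]

indexed = [r for r in incoming if keep(r)]  # only 1 of 3 records gets indexed and billed
```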
Logz.io’s open source-based product strategy
Logz.io’s product strategy is to build on top of the most popular open source observability technologies in the world, including OpenSearch, Prometheus, OpenTelemetry, and Jaeger. In a recent survey of over 500 engineers, Logz.io found that 93% of all engineering teams used these technologies in some capacity.
Logz.io enhances these technologies with additional features that simplify observability, accelerate MTTR, reduce costs, and ensure reliability and performance at any scale.
This strategy also drastically reduces the engineering resources required, since the core observability functions are provided by the open source projects – and these savings are passed on to the customer. Plus, customers save valuable engineering time by using the tools they already know, rather than learning something new.
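For example, a team that already instruments its services with OpenTelemetry only has to point its existing exporter at a different OTLP endpoint, rather than re-instrumenting with a proprietary agent. The sketch below uses the standard OpenTelemetry Python SDK; the endpoint URL and token are placeholders, not actual Logz.io values:

```python
# Standard OpenTelemetry Python SDK; only the export destination changes.
# The endpoint URL and token below are placeholders, not real Logz.io values.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://<your-otlp-endpoint>/v1/traces",     # swap backends here
    headers={"Authorization": "Bearer <your-token>"},
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Application instrumentation stays exactly the same regardless of backend.
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("checkout"):
    pass  # business logic here
```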
Logz.io benefits from, and contributes back to, the open source community, rather than building the bulk of their platform from scratch.
To summarize, Logz.io’s open source-based product strategy reduces costs in two key ways:
- Logz.io leverages open source wherever possible to minimize engineering costs, which translates to lower prices for the customer.
- Logz.io provides core observability capabilities without the abundance of advanced (and sometimes scarcely used) data analysis capabilities that Datadog, New Relic and Splunk use to justify their soaring costs.
While Logz.io does not have the hundreds of features and dozens of products that the Juggernauts offer, that doesn't mean less effective observability. Learn about Logz.io’s more-than-capable monitoring and troubleshooting capabilities.
Dramatically reduce log storage costs with a minimal impact on search performance
As cloud workloads grow alongside telemetry data volumes, indexing log data can become overwhelmingly expensive.
To reduce log indexing costs, a common strategy is to archive the bulk of log data in low cost cloud storage like AWS S3. When the data is needed, it can be pulled from S3 into the observability platform so it can be indexed in hot storage, which enables fast log queries.
This strategy – a typical hot-cold storage model – reduces long-term storage costs, but pulling log data from S3 back into an observability platform is cumbersome when you need the data and insights quickly.
Logz.io breaks this paradigm with Cold Search (GA in July 2023), which can query log data stored in Logz.io-hosted cold storage in near real time – delivering the cost of cold storage with the search performance of hot storage.
Ever since centralized logging was spearheaded by Splunk in the 2000s, there has been a tradeoff between costs and search performance. With Logz.io, you get both.
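As a rough illustration of the general concept – querying logs in place in low-cost object storage instead of rehydrating them into hot storage first – here's a generic sketch using S3 Select. The bucket and object names are hypothetical, and this is not how Cold Search is implemented under the hood:

```python
# Generic illustration: run a SQL filter against a gzipped JSON-lines log object
# where it sits in S3, instead of restoring the whole archive into hot storage.
# Bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")

response = s3.select_object_content(
    Bucket="my-archived-logs",
    Key="2023/07/01/app-logs.json.gz",
    ExpressionType="SQL",
    Expression="SELECT s.* FROM S3Object s WHERE s.level = 'ERROR'",
    InputSerialization={"JSON": {"Type": "LINES"}, "CompressionType": "GZIP"},
    OutputSerialization={"JSON": {}},
)

# Stream back only the matching records – far less data moves than a full restore.
for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")
```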
Future-proof your observability strategy
The Juggernauts use proprietary technology for data collection, alerting, and visualization, making it particularly difficult to migrate away from their stack if it gets too expensive – a classic case of vendor lock-in.
Keep this in mind if you expect to add more customers or build more cloud services in the next year – either way, observability costs can only go up.
By contrast, Logz.io was designed to minimize observability costs without jeopardizing the quality of observability. The three elements of Logz.io’s platform below can save users 50% of their observability costs compared to Datadog, New Relic and Splunk:
- Centralized data inventory and filtering: Users get a single UI to inventory all incoming data, easily identify useless data, and add filters to toss out the junk – which usually reduces data volumes by 30-50%.
- Open source foundation: By building our features on top of the most popular open source observability technologies, Logz.io doesn’t need to invest as much in engineering – these savings are passed onto the customer.
- Low-compute data storage: Logz.io Cold Search breaks the persistent log management paradigm that says you can’t have hot search performance and cold storage costs at the same time. With Logz.io, users get both by querying log data directly from S3 in near real time.
Unlike Datadog, New Relic, and Splunk, Logz.io is built on an open source foundation that also makes it easy to migrate off our platform if you don’t like it. That makes Logz.io the low-cost, low-risk alternative to the Juggernauts.
Learn more here about Logz.io’s cost efficiency features.