As an observability leader, we at Logz.io pride ourselves on continuous innovation. That’s why, last year, we released our AI agents to revolutionize observability by helping businesses and their engineering and DevOps teams automate data analysis and root cause analysis.
The primary way in which engineering and DevOps teams interact with the agents is by asking performance, troubleshooting, and optimization-related questions. The agents then mine stored observability data and provide the most relevant answers.
However, answering these questions isn’t as simple as it sounds.
To provide accurate and relevant responses, the system must process and analyze massive volumes of noisy observability data, such as logs, traces, and other telemetry.
That’s where our advanced data compression techniques come into play.
Why Use Compression?
Observability data is inherently voluminous. To answer questions in real time – while staying within an LLM’s context-window and token constraints – you could use methods like Retrieval-Augmented Generation (RAG) to fetch smaller subsets of data. But even the smallest relevant subset of observability data is usually too large for an LLM to process, so we went one step further.
As part of that approach, we built a telemetry compression layer. When a user asks a question, the AI agent reasons over that layer instead of the raw data, and accesses the raw data only when additional fields are needed. The benefit of this layer is that it lets the LLM analyze volumes of data that would otherwise be impossible to fit into its context.
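To make the flow concrete, here is a minimal sketch. It is not our production code: the `CompressedCluster` shape, the store interfaces, and the helper callables are hypothetical stand-ins, meant only to show an agent reasoning over compressed clusters first and reaching back to raw documents only when the question needs fields the clusters do not carry.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CompressedCluster:
    """One cluster stands in for many similar raw log lines."""
    template: str                                # e.g. "Connection to <HOST> timed out"
    count: int                                   # raw lines collapsed into this cluster
    raw_ids: list = field(default_factory=list)  # pointers back to raw documents

def answer_with_compression(
    question: str,
    search_compressed: Callable,   # question -> list[CompressedCluster]
    fetch_raw: Callable,           # ids -> raw documents (used only when needed)
    needs_raw_fields: Callable,    # does this question require fields the clusters lack?
    ask_llm: Callable,             # (question, context) -> answer
) -> str:
    # Reason over the compressed layer first: it is small enough for an LLM context.
    clusters = search_compressed(question)
    context = clusters
    if needs_raw_fields(question):
        # Fall back to raw data, but only for the documents behind the relevant clusters.
        raw_ids = [rid for c in clusters for rid in c.raw_ids]
        context = (clusters, fetch_raw(raw_ids))
    return ask_llm(question, context)
```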
Data Clustering and Patterns
Logz.io has a long history of researching compression techniques, building upon our patented work (US11928144B2) on clustering log messages. We’ve been using pattern recognition to enhance real-time log analysis by identifying recurring structures, improving anomaly detection, categorization, and correlation. Today, these techniques help us optimize monitoring and security and provide operational insights.
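As a rough illustration of what pattern-based clustering buys you (this is a toy version of the idea, not the patented algorithm): mask the variable tokens in each log line so that structurally identical lines collapse into a single template that can be counted, correlated, and stored far more compactly than the raw lines.

```python
import re
from collections import Counter

# Simplified template-based clustering: mask variable tokens so similar lines collapse.
MASKS = [
    (re.compile(r"\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b"), "<UUID>"),
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def to_template(line: str) -> str:
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line

logs = [
    "Request 8431 from 10.0.0.12 timed out after 3000 ms",
    "Request 8432 from 10.0.0.45 timed out after 2970 ms",
    "User login failed for account 771",
]
clusters = Counter(to_template(line) for line in logs)
for template, count in clusters.most_common():
    print(count, template)
# 2 Request <NUM> from <IP> timed out after <NUM> ms
# 1 User login failed for account <NUM>
```

A production clusterer also has to handle dynamic, previously unseen patterns and multi-token fields, but the compression effect is the same: many raw lines become one countable template.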
Here’s How Compression Works
1. Data is ingested and continuously analyzed and compressed in the background
As observability data is ingested, our system continuously analyzes, compresses, and distributes it across various compression data stores. By recognizing both predefined and dynamic patterns, we compress the data, reduce noise, discard unnecessary records, and store the results in multiple structures optimized for intent-based retrieval. This keeps query responses fast and relevant.
2. The right compression technique is applied, based on the user’s intent
We apply multiple compression techniques, each dedicated to answering a specific type of user query. Not all data is compressed in the same way: our system dynamically adapts, ensuring the most relevant data is compressed and retrieved for each specific question. For instance, if a user asks about the top errors in the past 24 hours, the system delivers the most pertinent compressed data set, one that contains error-related observability data only (see the sketch after these steps).
3. On-demand, customized data delivery
When a user queries the system, the AI agent leverages pre-compressed data tailored to the specific intent of the query. This delivers answers with minimal delay, providing a superior user experience.
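Putting the three steps together, a heavily simplified sketch might look like the following. The store names, the keyword-based intent classifier, and the ingestion rules are hypothetical stand-ins, and time-window filtering is omitted for brevity. The point is the shape of the flow: data is routed into intent-specific compressed stores as it arrives (step 1), the store matching the question’s intent is chosen at query time (step 2), and the answer comes straight from that pre-compressed store with no raw-data scan (step 3).

```python
from collections import Counter

# Hypothetical pre-compressed stores, each optimized for one kind of question.
STORES = {
    "top_errors": Counter(),   # error templates only
    "latency": Counter(),      # latency-related templates
}

def ingest(template: str, level: str) -> None:
    """Background step: route an incoming compressed template into the relevant stores."""
    if level == "ERROR":
        STORES["top_errors"][template] += 1
    if "timed out" in template.lower() or "latency" in template.lower():
        STORES["latency"][template] += 1

def classify_intent(question: str) -> str:
    """Toy intent classifier; a real system would use the LLM or a trained model."""
    return "top_errors" if "error" in question.lower() else "latency"

def answer(question: str, k: int = 5):
    store = STORES[classify_intent(question)]   # pick the matching compressed store
    return store.most_common(k)                 # already small; no raw-data scan needed

ingest("Connection to <HOST> timed out after <NUM> ms", "ERROR")
ingest("Connection to <HOST> timed out after <NUM> ms", "ERROR")
ingest("Payment failed for order <NUM>", "ERROR")
print(answer("What are the top errors in the past 24 hours?"))
# [('Connection to <HOST> timed out after <NUM> ms', 2), ('Payment failed for order <NUM>', 1)]
```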
These advanced techniques enable our AI agents to efficiently handle complex queries, delivering high-quality, accurate, and actionable answers.
Logz.io’s Continuous Innovation
But our work doesn’t end here. As we refine our AI agents, we continue to innovate and enhance our compression methods to ensure they’re always ready to deliver quality responses. On top of that, we are building our Semantic layer and our Adaptive Learning Framework; you’ll learn about these in future posts.
We believe that our investment in proprietary AI agent technology and additional advanced utilities gives our customers a competitive edge in troubleshooting, performance optimization, and system monitoring. In turn, this lets them take full control of their data, make better decisions, and resolve issues before they affect the business.
By combining AI agents, data compression, and other advanced utilities – and adding them to Logz.io’s proven observability capabilities – we help both our customers and our partners stay ahead of performance issues, optimize their systems, and achieve higher uptime. Stay tuned for more updates as we keep pushing the limits of observability and AI-powered data analysis.