Ready to get started? Elastic 8.0 is available now on Elastic Cloud — the only hosted Elasticsearch offering to include all of the new features in this latest release.
Speed, scale, and relevance: new beginnings, same foundation
With every ending there comes a new beginning. And, as we all start in on a new year (see ya 2021, hello 2022) — we’re also starting in on a new era of speed, scale, and relevance with Elastic 8.0.
Our customers and community know that our commitment to speed, scale, and relevance is unwavering. With each and every Elastic release there are enhancements and optimizations to ensure that Elasticsearch is the fastest, most scalable, and most capable search engine available.
In fact, over the last three years, we have made great strides to: reduce memory usage (allowing more data to be managed per node), reduce query overhead (especially impactful with large deployments), and introduce some totally new features to enhance relevance.
For example, with the 7.x stream of releases, we increased the speed of date histograms and search aggregations, enhanced the performance of page caching, and created a new “pre-filter” search phase. In addition, we reduced resource requirements (read: lowered our customers’ total cost of ownership) through heap memory reductions, full support for the ARM architecture, novel ways to use less storage, and a new frozen tier with searchable snapshots that lets customers easily decouple compute from storage.
Perhaps the best part about an endless stream of Elastic Stack optimizations is that however you choose to put your data to work, these enhancements inherently help you to search, solve, and succeed with speed and at scale — no legwork required.
Improve search relevance with native vector search
Elastic 8.0 brings a full suite of native vector search capabilities that empower customers and employees to search and receive highly relevant results using their own words and language.
Over the last two years, we’ve been working to make Elasticsearch a great place to do vector search. Way back, with the release of Elasticsearch 7.0, we introduced field types for high-dimensional vectors. With Elasticsearch 7.3 and Elasticsearch 7.4 we introduced support for vector similarity functions. These early releases demonstrated the promise of bringing vector search techniques to the Elasticsearch ecosystem. We’ve been thrilled to see our customers and community eagerly adopt them for a wide range of use cases.
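The similarity functions introduced in those 7.x releases score documents with standard vector-space measures such as cosine similarity and dot product. As a rough illustration (not Elasticsearch code, just the underlying math), cosine similarity can be sketched in a few lines of Python:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of vector magnitudes:
    # the same quantity a cosine-based similarity function scores on.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0; orthogonal vectors score 0.0.
score = cosine_similarity([0.1, 0.9, 0.2], [0.2, 0.8, 0.1])
```

Documents whose vectors point in nearly the same direction as the query vector receive scores close to 1.0, which is what makes this measure useful for ranking by semantic similarity.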
Today, with Elasticsearch 8.0, we’re making vector search even more practical to implement by bringing native support for natural language processing (NLP) models directly into Elasticsearch. In addition, Elasticsearch 8.0 includes native support for approximate nearest neighbor (ANN) search — making it possible to compare vector-based queries with a vector-based document corpus with speed and at scale.
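To see what “approximate” buys you, it helps to look at the exact alternative. Exact k-nearest-neighbor search scans every document vector, which is simple but scales linearly with corpus size; ANN indexes trade a small amount of recall for dramatically less work per query. A minimal Python sketch of the exact scan that ANN approximates (illustrative only, with toy vectors):

```python
import math

def exact_knn(query, docs, k):
    """Exact k-nearest-neighbor search by scanning every document vector.

    docs maps document IDs to vectors. ANN search avoids this full scan
    by navigating a prebuilt index structure instead, returning
    near-identical results at a fraction of the cost on large corpora.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(x * x for x in b)))

    ranked = sorted(docs.items(), key=lambda kv: cos(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

corpus = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.9, 0.1]}
top = exact_knn([1.0, 0.0], corpus, 2)  # "a" and "c" point the same way as the query
```

The full scan is fine for thousands of vectors but becomes the bottleneck at millions, which is precisely the gap ANN search is designed to close.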
Open up a new world of analysis with the power of NLP
Elasticsearch has always been a good place to do NLP, but historically it required doing some of the processing outside of Elasticsearch, or writing some pretty sophisticated plugins. With 8.0, users can perform named entity recognition, sentiment analysis, text classification, and more directly in Elasticsearch, without requiring additional components or coding. Calculating and creating vectors natively within Elasticsearch is not only a “win” for horizontal scalability (computations are distributed across a cluster of servers); it also saves Elasticsearch users significant amounts of time and effort.
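To make the task shape concrete: a sentiment-analysis model takes raw text in and returns a label with a score. The deliberately simplistic word-list toy below is not how Elasticsearch's trained models work (real deployments run an uploaded machine-learning model inside the cluster); it only illustrates the input/output contract of the task:

```python
# Toy stand-in for a sentiment model: the word lists and scoring are
# invented for illustration and bear no relation to a real trained model.
POSITIVE = {"great", "fast", "love", "excellent"}
NEGATIVE = {"slow", "broken", "hate", "poor"}

def toy_sentiment(text):
    # Text in, label + score out: the same contract a deployed
    # sentiment model fulfills, minus the actual machine learning.
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"label": label, "score": score}

result = toy_sentiment("search is great and fast")
```

The value of 8.0's native support is that this kind of inference runs where the data already lives, rather than in a separate service you have to build, host, and keep in sync.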