The release of Elastic Stack 8.0 introduced the ability to upload PyTorch machine learning models into Elasticsearch, bringing modern natural language processing (NLP) to the Elastic Stack. NLP opens up opportunities to extract information, classify text, and provide better search relevance through dense vectors and approximate nearest neighbor search.
In this multi-part blog series, we will walk through end-to-end examples using a variety of PyTorch NLP models.
Part 1: Getting started with NLP models
Part 2: Named entity recognition (NER)
Part 3: Sentiment analysis
In each example we will use a prebuilt NLP model from the Hugging Face model hub. Then we will follow Elastic's documented instructions for deploying the model and adding NLP inference to an ingest pipeline. Because it's always a good idea to start with a defined use case and an understanding of the text data the model will process, we'll start by defining the objective for using NLP and a shared data set for anyone to try out.
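To give a feel for that workflow before we dig into the individual examples, here is a minimal Python sketch of one common approach: importing a Hugging Face model with the eland library and wiring it into an ingest pipeline with an inference processor. The cluster URL, credentials, pipeline name, and the `dslim/bert-base-NER` model choice are placeholders for illustration, and eland's API has evolved across releases, so treat the exact calls as approximate rather than definitive.

```python
from pathlib import Path

from elasticsearch import Elasticsearch
from eland.ml.pytorch import PyTorchModel
from eland.ml.pytorch.transformers import TransformerModel

# Hypothetical endpoint and credentials; substitute your own cluster details
# (for Elastic Cloud you can pass cloud_id= instead of a URL).
es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "<password>"))

# Pull an example NER model from the Hugging Face model hub and trace it to
# TorchScript, the representation Elasticsearch expects.
tm = TransformerModel("dslim/bert-base-NER", "ner")
tmp_dir = Path("models")
tmp_dir.mkdir(exist_ok=True)
model_path, config, vocab_path = tm.save(str(tmp_dir))

# Upload the traced model, its configuration, and vocabulary into the cluster.
ptm = PyTorchModel(es, tm.elasticsearch_model_id())
ptm.import_model(model_path=model_path, config_path=None,
                 vocab_path=vocab_path, config=config)

# After starting the model deployment (for example from Kibana's ML UI or the
# start trained model deployment API), an ingest pipeline can call it through
# an inference processor; by default the processor reads the incoming
# document's "text_field".
es.ingest.put_pipeline(
    id="ner-pipeline",  # hypothetical pipeline name
    description="Run a deployed NER model on incoming documents",
    processors=[{"inference": {"model_id": tm.elasticsearch_model_id()}}],
)
```

With a pipeline like this in place, any document indexed with `pipeline="ner-pipeline"` is run through the model at ingest time, which is the pattern the later parts of this series build on.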
To prepare for the NLP examples, we will need an Elasticsearch cluster running at least version 8.0 and an ML node with at least 2GB of RAM; the named entity recognition (NER) example also requires the mapper-annotated-text plugin. One of the easiest ways to get started is to follow along with these NLP examples using your own free 14-day trial cluster on Elastic Cloud. Cloud trials can scale up to a maximum of two 2GB ML nodes, which will let you deploy one or two of the examples in this multi-part blog series at any one time.
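If you want to confirm those prerequisites before continuing, the short sketch below checks the cluster version, the mapper-annotated-text plugin, and the presence of an ML node using the Elasticsearch Python client. The endpoint and credentials are placeholders; this is just one way to verify the setup, not part of the official instructions.

```python
from elasticsearch import Elasticsearch

# Hypothetical endpoint and credentials; for an Elastic Cloud trial you can
# pass cloud_id= and the deployment's credentials instead.
es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "<password>"))

# The NLP features described in this series require Elasticsearch 8.0 or later.
print("Elasticsearch version:", es.info()["version"]["number"])

# The NER example in part 2 relies on the mapper-annotated-text plugin.
plugins = es.cat.plugins(format="json")
print("mapper-annotated-text installed:",
      any(p.get("component") == "mapper-annotated-text" for p in plugins))

# Deployed models run on ML nodes, so at least one node needs the ml role.
roles = {role for node in es.nodes.info()["nodes"].values() for role in node["roles"]}
print("ML node present:", "ml" in roles)
```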