As we mentioned before, this is the single most important step for indexing custom logs. Technically these fixes are optional, but applying them lets you get the most value from your unstructured logs. The log messages themselves are still unstructured and could contain anything: lots of text, numbers, exceptions, errors, you name it.
If you expand and review one of your log events in Discover, you may notice that things aren’t as unstructured as you might have thought.
To give an example, the host used in the example was running in GCP. Without any additional configuration, we now get a wide range of cloud-provider-specific metadata fields, in addition to metadata at the host level. This cloud provider metadata is really powerful when doing log analysis later.
This metadata allows you to filter or group log data by a set of hosts or by a specific instance. The cost and overhead of this metadata is also low. Even though many fields are added on top of the timestamp and message we started with, these extra fields compress extremely well, since they always contain the same values for a given source.
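As a rough illustration, an enriched log event might look something like the following (the field names follow the Elastic Common Schema, but the values here are made up):

```json
{
  "@timestamp": "2021-06-01T12:34:56.789Z",
  "message": "Connection refused by upstream service",
  "host": {
    "name": "web-frontend-01",
    "os": { "name": "Ubuntu" }
  },
  "cloud": {
    "provider": "gcp",
    "availability_zone": "us-central1-a",
    "instance": { "id": "4295368103901789008" },
    "machine": { "type": "e2-medium" },
    "project": { "id": "my-gcp-project" }
  }
}
```

Because the `host.*` and `cloud.*` values repeat identically on every event from the same source, they add very little to the index size.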
What can we do with unstructured log data like this?
Search it! Elasticsearch is really powerful and fast when it comes to searching through terabytes of data. Since it indexes the content of every document by default, searching through it all becomes fast and easy.
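If you want to try this outside of Kibana, here is a minimal sketch of a search request in Kibana Dev Tools console syntax, assuming a hypothetical filebeat-* index pattern; the match query below is standard Elasticsearch Query DSL:

```
GET filebeat-*/_search
{
  "query": {
    "match": {
      "message": "connection refused"
    }
  }
}
```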
One common approach for troubleshooting applications or servers when using Kibana is to simply open up the Discover UI and start searching. Start by excluding all of the logs you’re not interested in, either by removing them based on a field value or by using a full-text search with the Kibana Query Language (KQL). You can find out more about the Kibana Query Language here, but let’s look at some examples:
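(The field names and values below are illustrative; each line is a separate query you might type into the Discover search bar.)

```
message: "connection refused"
not log.level: debug
cloud.instance.id: "4295368103901789008" and message: *timeout*
```

The first query matches events whose message contains the phrase "connection refused", the second excludes debug-level events, and the third combines an exact field filter with a wildcard match in a single query.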