As more organizations leverage the Amazon Web Services (AWS) cloud platform and services to drive operational efficiency and bring products to market, managing logs becomes a critical component of maintaining visibility and safeguarding multi-account AWS environments. Traditionally, logs are stored in Amazon Simple Storage Service (Amazon S3) and then shipped to an external monitoring and analysis solution for further processing.
To simplify this process and reduce management overhead, AWS users can now leverage the new Amazon Kinesis Data Firehose delivery stream to ingest logs into Elastic Cloud on AWS in real time and view them in the Elastic Stack alongside other logs for centralized analytics. This eliminates time-consuming and expensive procedures such as provisioning virtual machines or operating data shippers.
Elastic Observability unifies logs, metrics, and application performance monitoring (APM) traces for a full contextual view across your hybrid AWS environments alongside their on-premises data sets. Elastic Observability enables you to track and monitor performance across a broad range of AWS services, including AWS Lambda, Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), Amazon Simple Storage Service (S3), AWS CloudTrail, AWS Network Firewall, and more.
In this blog, we will walk you through how to use the Amazon Kinesis Data Firehose integration — Elastic is listed in the Amazon Kinesis Firehose drop-down list — to simplify your architecture and send logs to Elastic, so you can monitor and safeguard your multi-account AWS environments.
Announcing the Kinesis Firehose method
Elastic currently provides both agent-based and serverless mechanisms, and we are pleased to announce the addition of the Kinesis Firehose method. This new method enables customers to ingest logs from AWS directly into Elastic, supplementing our existing options:
- Elastic Agent pulls metrics and logs from CloudWatch and S3, where services such as EC2, ELB, WAF, and Route 53 typically push their logs, and ingests them into Elastic Cloud.
- Elastic’s Serverless Forwarder (which runs as an AWS Lambda function and is available in the AWS Serverless Application Repository) sends logs from Amazon Kinesis Data Streams, Amazon S3, and Amazon CloudWatch log groups into Elastic. To learn more about this topic, please see this blog post.
- Amazon Kinesis Data Firehose ingests logs from AWS directly into Elastic (specifically, if you are running Elastic Cloud on AWS); a minimal sketch of creating such a delivery stream follows this list.
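To make that last option concrete, below is a minimal sketch, using Python and boto3 (a choice made for this illustration, not something the blog prescribes), of creating a Kinesis Data Firehose delivery stream whose destination is an Elastic HTTP endpoint. The endpoint URL, API key, role and bucket ARNs, and the es_datastream_name attribute value are placeholders you would replace with values from your own Elastic Cloud deployment and AWS account.

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Placeholder values: replace with your Elastic Cloud endpoint, an Elastic
# API key, and role/bucket ARNs from your own AWS account.
ELASTIC_ENDPOINT_URL = "https://my-deployment.es.us-east-1.aws.found.io"
ELASTIC_API_KEY = "<elastic-api-key>"

firehose.create_delivery_stream(
    DeliveryStreamName="elastic-vpc-flow-logs",
    DeliveryStreamType="DirectPut",
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Url": ELASTIC_ENDPOINT_URL,
            "Name": "Elastic",
            "AccessKey": ELASTIC_API_KEY,
        },
        "RequestConfiguration": {
            "ContentEncoding": "GZIP",
            # Assumed attribute for routing records to an Elastic data stream;
            # check the integration documentation for the exact name and value.
            "CommonAttributes": [
                {
                    "AttributeName": "es_datastream_name",
                    "AttributeValue": "logs-aws.vpcflow-default",
                }
            ],
        },
        # Firehose requires an S3 destination for records it cannot deliver.
        "S3BackupMode": "FailedDataOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-backup-role",
            "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",
        },
    },
)
```

The same delivery stream can also be created from the AWS console by choosing Elastic as the destination in the Kinesis Data Firehose drop-down list, as mentioned above.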
In this blog, we will cover the last option since we have recently released the Amazon Kinesis Data Firehose integration. Specifically, we’ll review:
- A general overview of the Amazon Kinesis Data Firehose integration and how it works with AWS
- Step-by-step instructions to set up the Amazon Kinesis Data Firehose integration on AWS and on Elastic Cloud
By the end of this blog, you’ll be equipped with the knowledge and tools to simplify your AWS log management with Elastic Observability and Amazon Kinesis Data Firehose.
Prerequisites and configurations
If you intend to follow the steps outlined in this blog post, there are a few prerequisites and configurations that you should have in place beforehand.
- You will need an account on Elastic Cloud and a deployed stack on AWS. Instructions for deploying a stack on AWS can be found here. This is necessary for ingesting logs through Amazon Kinesis Data Firehose.
- You will also need an AWS account with the necessary permissions to pull data from AWS. Details on the required permissions can be found in our documentation.
- Finally, be sure to turn on VPC Flow Logs for the VPC where your application is deployed and configure them to be sent to Amazon Kinesis Data Firehose, as sketched below.
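As a concrete illustration of that last prerequisite, the snippet below (again Python with boto3, purely for illustration) enables VPC Flow Logs on a VPC and delivers them straight to a Kinesis Data Firehose delivery stream. The VPC ID and delivery stream ARN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholders: substitute your own VPC ID and the ARN of the Firehose
# delivery stream that forwards data to Elastic.
response = ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],
    TrafficType="ALL",  # capture both accepted and rejected traffic
    LogDestinationType="kinesis-data-firehose",
    LogDestination=(
        "arn:aws:firehose:us-east-1:123456789012:"
        "deliverystream/elastic-vpc-flow-logs"
    ),
)
print(response["FlowLogIds"])
```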
Elastic’s Amazon Kinesis Data Firehose integration
Elastic has collaborated with AWS to offer a seamless integration of Amazon Kinesis Data Firehose with Elastic, enabling direct ingestion of data from Amazon Kinesis Data Firehose into Elastic without the need for Agents or Beats. All you need to do is configure the Amazon Kinesis Data Firehose delivery stream to send its data to Elastic’s endpoint. In this configuration, we will demonstrate how to ingest VPC Flow logs and Firewall logs into Elastic. You can follow a similar process to ingest other logs from your AWS environment into Elastic.
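Once the delivery stream exists, one quick way to confirm that records actually reach Elastic is to push a test record through the stream and then search for it in Kibana. The sketch below assumes a DirectPut stream with the placeholder name elastic-vpc-flow-logs; real VPC Flow Logs and Network Firewall logs are written to the stream by AWS itself once those services are configured.

```python
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# A hypothetical test record, used only to verify end-to-end connectivity.
test_event = {"message": "firehose connectivity test", "source": "manual-test"}

firehose.put_record(
    DeliveryStreamName="elastic-vpc-flow-logs",
    Record={"Data": (json.dumps(test_event) + "\n").encode("utf-8")},
)
```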
There are three distinct configurations available for ingesting VPC Flow and Network Firewall logs into Elastic: sending the logs through CloudWatch, sending them through S3, or sending them directly through Kinesis Data Firehose. Each has its own setup. With CloudWatch and S3 you can store the logs and forward them later, whereas with Kinesis Data Firehose they are ingested immediately. In this blog post, we will focus on the new configuration, which sends VPC Flow Logs and Network Firewall logs directly to Elastic.