
Flow logs capture information about IP traffic going to and from network interfaces in a virtual private cloud (VPC). They’re used to troubleshoot connectivity and security issues and to make sure network access and security group rules are working as expected. InfoSec and security teams also use VPC flow logs for anomaly and traffic analysis.
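Each flow log record is a single space-separated line of fields. As a concrete illustration, here is a minimal Python sketch that parses a record in the default (version 2) format; the field names follow the documented VPC flow log layout, and the sample line is illustrative:

```python
# Field layout of a default (version 2) VPC flow log record.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log(line: str) -> dict:
    """Split one flow log line into a field dict, converting numeric fields."""
    record = dict(zip(FIELDS, line.split()))
    # Numeric fields can be "-" when no data was recorded in the window
    # (e.g. NODATA / SKIPDATA records), so only convert real numbers.
    for key in ("srcport", "dstport", "protocol", "packets", "bytes",
                "start", "end"):
        if record[key] != "-":
            record[key] = int(record[key])
    return record

sample = ("2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
rec = parse_flow_log(sample)
print(rec["dstport"], rec["action"])  # 22 ACCEPT
```

A record like the sample above — SSH traffic (destination port 22) that was allowed — is exactly the kind of event the Splunk dashboards later aggregate.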

Thanks to recent native Kinesis integration with Splunk, it’s become easy to stream data to Splunk and extract valuable insights. For customers with multiple accounts, it is more efficient to do log analysis with centralized data and dashboards.

In AWS, custom analysis of streaming data from multiple accounts can be done by collecting federated logs for central processing. Custom applications consume streaming data that is assembled across the accounts and delivered using CloudWatch Logs Destinations, Subscriptions and Kinesis.

Kinesis Data Firehose is a fully managed, reliable and scalable solution for delivering real-time streaming data to destinations such as Amazon S3, Redshift, Elasticsearch Service and Splunk. It can also be configured to transform data before delivery. As a platform-as-a-service solution, it provides significant cost savings.

Splunk captures and indexes data in real time and uses it to generate visualizations. There are two apps, “Splunk Add-on for AWS” and “Splunk App for AWS”, with built-in searches, macros, dashboards and panels for VPC Traffic Analysis and VPC Security Analysis, in addition to other AWS-related visualizations.

Cross-account data sharing

Using a CloudWatch Logs Destination, data can be sent from multiple sender accounts to a single receiving account. In AWS Organizations, it can also be used to push policies and controls down through the organizational structure. For data to be shared, both sender and recipient details are needed.

So how do you get started? First, set up a log destination (Kinesis Firehose) in the data recipient account. The receiver account then shares the log destination information with the sender accounts, and access for the sender accounts is granted using IAM policies.
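Under the hood, those two pieces — a destination in the receiving account and an access policy for the senders — can be sketched with the AWS CLI. All names, regions and ARNs below are placeholders, and the role ARN must belong to a role that allows CloudWatch Logs to write to the Firehose stream:

```shell
# In the receiving account (222222222222): create a CloudWatch Logs
# destination that targets the Kinesis Firehose delivery stream.
aws logs put-destination \
  --region us-east-1 \
  --destination-name "vpcFlowLogsDestination" \
  --target-arn "arn:aws:firehose:us-east-1:222222222222:deliverystream/vpc-flow-logs" \
  --role-arn "arn:aws:iam::222222222222:role/CWLtoFirehoseRole"

# Allow the sender account (111111111111) to attach subscription
# filters to that destination.
aws logs put-destination-policy \
  --region us-east-1 \
  --destination-name "vpcFlowLogsDestination" \
  --access-policy '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": "111111111111"},
      "Action": "logs:PutSubscriptionFilter",
      "Resource": "arn:aws:logs:us-east-1:222222222222:destination:vpcFlowLogsDestination"
    }]
  }'
```

To onboard additional sender accounts, extend the `Principal` list in the access policy.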

Our previous blog explained how to ingest flow logs into Splunk from an AWS account using Firehose. This time we look at how to ingest data from multiple accounts.

Solution overview

Set out below is the architecture and dataflow for VPC flow logs from multiple accounts into Kinesis Firehose in the central logging account, and from there into Splunk. This blog post will discuss the following steps:

Set up Splunk HTTP Event Collector.

Create Log Destinations and Kinesis Firehose in the receiving logging account and set up permissions for sender accounts to stream data.

Set up Splunk HEC as the destination for Kinesis Firehose in the central logging account.

Set up CloudWatch subscription filters on sender accounts with the receiving account as destination.

Step-by-step process

Set up one account as the receiving account (222222222222) and one as the sending account (111111111111).

Create an HTTP Event Collector in Splunk, then set up Kinesis Firehose and a Logs Destination in receiving account 222222222222. Below is the data flow for flow logs.

Create a Splunk HTTP Event Collector

First, create the HEC in Splunk. From Splunk Web, go to Data Inputs > HTTP Event Collector and add a new token for receiving data over HTTP. Check Enable indexer acknowledgement while creating the token; Kinesis Firehose requires it.
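Once the token exists, it’s worth confirming the collector accepts events before wiring up Firehose. A quick smoke test with curl (the host and token below are placeholders; HEC listens on port 8088 by default):

```shell
# Send a single test event to the HEC endpoint.
curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": "HEC smoke test", "sourcetype": "manual"}'
# A healthy collector replies with {"text":"Success","code":0}
# (plus an ackId field when indexer acknowledgement is enabled).
```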

Create Lambda function for log processing (receiving account)
Go to Lambda, create a function and select a blueprint. Search for kinesis-firehose-cloudwatch and select the kinesis-firehose-cloudwatch-logs-processor blueprint.
Click Configure, name the function lambda-cw-transform and choose the execution role we created above (lambda-firehose-basic-role).
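The blueprint’s job is to unpack the records Firehose hands it: each record carries a base64-encoded, gzip-compressed CloudWatch Logs payload. Below is a simplified Python sketch of that transformation — not the blueprint’s exact code — which drops CloudWatch control messages and emits one line per log event:

```python
import base64
import gzip
import json

def handler(event, context):
    """Firehose transformation: unpack CloudWatch Logs payloads and emit
    newline-delimited log messages (simplified sketch of the
    kinesis-firehose-cloudwatch-logs-processor blueprint)."""
    output = []
    for record in event["records"]:
        payload = json.loads(gzip.decompress(base64.b64decode(record["data"])))
        if payload["messageType"] == "CONTROL_MESSAGE":
            # CloudWatch sends control messages to verify the subscription;
            # they carry no log data, so drop them.
            output.append({"recordId": record["recordId"], "result": "Dropped"})
            continue
        # One line per log event, so Splunk indexes each flow log record.
        data = "\n".join(e["message"] for e in payload["logEvents"]) + "\n"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(data.encode()).decode(),
        })
    return {"records": output}
```

Firehose matches each returned record to its input by `recordId`, so every input record must appear in the output with a `result` of Ok, Dropped or ProcessingFailed.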

Create Firehose Stream with Splunk as destination (receiving account)
Go to the AWS Console, select Kinesis, then Data Firehose, and create a new Firehose delivery stream. Enter a name and choose Direct PUT as the source.

Enable record transformation and select the Lambda function we created before.

Select Splunk as destination.

Input the Splunk HEC URL and token which we created initially.

Select an S3 backup bucket. We will choose to back up failed events only.

Select the defaults for S3 buffer conditions, compression and encryption, and error logging, and for IAM choose to create a new Firehose role. Review and create the delivery stream.
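At this point the receiving side is complete, but flow logs only start arriving once each sender account subscribes its log group to the shared destination (step 4 in the overview). A sketch with the AWS CLI, run in sender account 111111111111 — the log group, filter and destination names are placeholders, and an empty filter pattern forwards every event:

```shell
# In the sender account (111111111111): stream the flow log group
# to the receiving account's log destination.
aws logs put-subscription-filter \
  --region us-east-1 \
  --log-group-name "vpc-flow-logs" \
  --filter-name "all-flow-log-events" \
  --filter-pattern "" \
  --destination-arn "arn:aws:logs:us-east-1:222222222222:destination:vpcFlowLogsDestination"
```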

It’s time to verify that events are being sent to Splunk. Within a few seconds, data is sent to Kinesis Firehose and on to Splunk. If you search the index, you should see the flow log data.

Go to the Splunk App, then Insights, and check VPC Traffic Analysis and Security Analysis. These are predefined dashboards, searches and panels, all part of the Splunk App for AWS.

Summary

AWS services such as Kinesis Firehose, CloudWatch Logs Destinations and subscription filters make it easy to aggregate and stream data to destinations like Elasticsearch, Redshift and Splunk and to analyze it in real time. They can also be used to troubleshoot connectivity and security issues and to make sure network access and security group rules are working as expected.

Splunk has other inputs that poll data from AWS for visualizations. This push solution can be applied to other CloudWatch logs, and Splunk graphs, reports, alerts and dashboards can all be generated easily.

About the author

Kiran Gekkula
Manager, Technical Architecture

Kiran Gekkula is an experienced AWS Solutions Architect at Accenture and has been working closely with customers to simplify enterprise cloud adoption, enhancement and operations. He has played a key role in migrating ERP applications, microservices, DevOps, big data and analytics. Kiran is based out of Dallas, Texas.