Amazon S3 Ingestion (Manual Setup)

Loggly can automatically retrieve new log files added to your S3 bucket(s). Our service supports logs from ELB, ALB, and CloudFront, as well as any uncompressed line-separated text files. Loggly provides a script that configures your account for S3 ingestion via the Amazon SQS service automatically. This guide is for people who prefer to configure their Amazon account manually. It takes a little more work to set up, but you can review and control each step yourself.

Ingestion works by listening for events from Amazon indicating that a new object has been created in your bucket. To make event delivery reliable, the events are sent through Amazon’s Simple Queue Service (SQS), which stores each event until we can retrieve it. When we receive the notification, we download the log file and ingest it into Loggly.

Note: S3 ingestion has a maximum file size of 1 GB. Files larger than 1 GB are skipped.

Supported file formats: .txt, .gz, .json.gz, .zip, .log.

Adding a new AWS source

To set up S3 ingestion using the Amazon SQS service, go to the “Source Setup” -> “S3 Sources” tab and click the “Add New” button.

Now click the “Manual” tab to see the form that needs to be completed. The instructions below will help you complete it. In short, you need to allow Loggly to read from your chosen S3 bucket and to be notified of new objects created in that bucket.

Step 1

Amazon Simple Queue Service (SQS) is a fast, reliable, scalable, fully managed message queuing service. SQS makes it simple and cost-effective to decouple the components of a cloud application. You can use SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available.

Here is how it works: whenever a new object is created in the S3 bucket, S3 fires an ObjectCreated event to the SQS queue. Loggly then retrieves that notification from the queue, which contains the bucket and key of the S3 object, and downloads the object from S3 using the access key and secret access key that you will provide. Please note that objects added before the notification is configured will not be sent to the SQS queue.
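For reference, the ObjectCreated notification that Loggly reads from the queue is a JSON document along the following lines (the bucket name, object key, and region shown here are placeholder values):

```json
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "your-bucket-name" },
        "object": { "key": "loggly/2017/01/access.log.gz", "size": 1024 }
      }
    }
  ]
}
```

Loggly uses the bucket and key fields from this message to locate and download the new object.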

A. Create a SQS queue manually

Go to the AWS console and select SQS from the Services drop-down:

Create a new queue, or select an existing queue that is dedicated to Loggly:

A default region will be selected automatically. The SQS queue needs to be in the same region as the S3 bucket. You can check the S3 bucket region in bucket properties as shown below:

You can change the region of the SQS queue from the drop-down menu on the right side of the toolbar:

B. Add permissions to the SQS queue

After creating the queue, select it in the table, go to the Permissions tab, and click Edit Policy Document:

An editor window will open; copy and paste the JSON below into it:
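The policy grants S3 permission to send messages to your queue. A representative version looks like the following, where the queue name (loggly-s3-queue), account ID (123456789012), region, and bucket name are placeholders you must replace with your own values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3ToSendMessages",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:loggly-s3-queue",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:*:*:your-bucket-name" }
      }
    }
  ]
}
```

The ArnLike condition restricts the queue so that only notifications originating from your bucket are accepted.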

C. Configure your S3 bucket to send ObjectCreated events to SQS queue

Select the bucket you specified in the SQS policy, right-click it, and select Properties:

Expand the Events section, select “ObjectCreated (All)” from the event type drop-down, and select the SQS queue you just created from the SQS queue drop-down:
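If you prefer to configure this with the AWS CLI or API instead of the console, the equivalent bucket notification configuration looks roughly like this (the queue ARN is a placeholder):

```json
{
  "QueueConfigurations": [
    {
      "QueueArn": "arn:aws:sqs:us-east-1:123456789012:loggly-s3-queue",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```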

D. Grant permissions to Loggly to read from your S3 bucket

Loggly needs permission to pull the log data from your S3 bucket. The easiest way to grant it is to create a new IAM user in your account that has read-only permission on the S3 bucket.
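For example, an IAM policy granting the new user read-only access to a single bucket might look like the following (the bucket name is a placeholder; s3:ListBucket applies to the bucket itself and s3:GetObject to the objects in it):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
```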

Step 2

Enter the access credentials (AWS access key ID and secret access key) for the user you just created:

Step 2.1

Enter the AWS account number:

Step 3

Choose the customer token you would like to use to send the logs to Loggly.

If you have multiple active tokens, select the appropriate token from the drop-down field. If you have only one active token, that token is used by default and this step is not shown on the page:

Step 4

Enter the name of your SQS queue if you would like Loggly to receive notifications of new objects added to the bucket:

Step 4.1

Enter the S3 bucket name. Optionally, you can also provide a prefix. A prefix works like a folder: if you add a prefix here, only keys (files) under that prefix will be ingested by Loggly. The prefix can contain multiple folders separated by slashes, for example “loggly/2017/01”:

Note: Only one prefix per bucket is allowed; if you change the prefix, only keys matching the new prefix will be ingested.

Step 5

You may optionally provide one or more comma-separated tags that describe your data and make it easier to search in Loggly:

Click Save after you have entered the information. You will then return to the S3 Sources page, where a green checkmark under the Status column indicates the configuration was successful.

Troubleshooting S3 Ingestion (Manual Setup)

If you don’t see any data in the Search tab, check for these common problems.