Now let’s upload some data and trigger an alert to see StreamAlert in action! This example uses
SNS for both sending the log data and receiving the alert, but StreamAlert also supports many other
data sources and alert outputs.

If you look at conf/outputs.json, you’ll notice that the SNS topic was automatically added.
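The exact contents depend on your deployment, but the entry to look for is an aws-sns mapping of roughly this shape (the schema sketch and ARN below are illustrative placeholders, not values copied from a real config):

```json
{
  "aws-sns": {
    "streamalert-test-data": "arn:aws:sns:us-east-1:123456789012:streamalert-test-data"
  }
}
```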

7. Configure a rule to send to the alerts topic.
We will use rules/community/cloudtrail/cloudtrail_root_account_usage.py as an example, which
alerts on any usage of the root AWS account. Change the rule decorator to:
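The decorator itself isn't reproduced in this excerpt. As an illustration only, the sketch below mimics the shape of a StreamAlert rule with an SNS output added to its decorator; the minimal `rule` implementation and the output name are stand-ins for this example, not the real StreamAlert API.

```python
# Illustrative stand-in: StreamAlert's real @rule decorator comes from its
# rules engine. This minimal version only mimics its shape so the logs/outputs
# keywords are concrete. The 'aws-sns:streamalert-alerts' name is hypothetical.
RULES = {}

def rule(logs=None, outputs=None):
    """Register a rule with its input log types and alert outputs."""
    def decorator(func):
        RULES[func.__name__] = {'logs': logs, 'outputs': outputs, 'func': func}
        return func
    return decorator

@rule(logs=['cloudtrail:events'],
      outputs=['aws-sns:streamalert-alerts'])  # hypothetical output name
def cloudtrail_root_account_usage(record):
    """Alert on any usage of the root AWS account."""
    identity = record['detail']['userIdentity']
    return (identity['type'] == 'Root'
            and identity.get('invokedBy') is None
            and record['detail']['eventType'] != 'AwsServiceEvent')

# A root-account console sign-in should match the rule
sample = {'detail': {'userIdentity': {'type': 'Root'},
                     'eventType': 'AwsConsoleSignIn'}}
print(cloudtrail_root_account_usage(sample))  # True
```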

# Hook the streamalert-test-data SNS topic up to the StreamAlert rule processor
./manage.py terraform build
# Deploy a new version of all of the Lambda functions with the updated rule and config files
./manage.py lambda deploy -p all


Note

Use terraform build and lambda deploy to apply any changes to StreamAlert’s
configuration or Lambda functions, respectively. Some changes (like this example) require both.

Time to test! Create a file named cloudtrail-root.json with the following contents:
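The test payload isn't reproduced in this excerpt. The sketch below builds a minimal CloudTrail-style record of the shape the root-account rule inspects and writes it out; all field values are illustrative, not a real AWS log.

```python
import json

# Minimal CloudTrail-style event; values are illustrative. The root-account
# rule only needs detail.userIdentity and detail.eventType to be present.
record = {
    'account': '123456789012',
    'source': 'aws.signin',
    'detail-type': 'AWS Console Sign In via CloudTrail',
    'detail': {
        'userIdentity': {
            'type': 'Root',
            'arn': 'arn:aws:iam::123456789012:root',
        },
        'eventType': 'AwsConsoleSignIn',
        'eventName': 'ConsoleLogin',
    },
}

# Write the record as a single JSON object, ready to publish as a message
with open('cloudtrail-root.json', 'w') as f:
    json.dump(record, f)

print('wrote cloudtrail-root.json')
```

You would then publish the file to the test-data SNS topic, for example with `aws sns publish --topic-arn <your-topic-arn> --message file://cloudtrail-root.json`.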

If all goes well, an alert should arrive in your inbox within a few minutes!
If not, look for any errors in the CloudWatch Logs for the StreamAlert Lambda functions.

10. After 10 minutes (the default refresh interval), the alert will also be searchable from
AWS Athena. Select your StreamAlert database in the
dropdown on the left and preview the alerts table:
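A preview query of the kind the Athena console generates might look like the following (the table name `alerts` is an assumption based on the default naming conventions):

```sql
SELECT * FROM alerts LIMIT 10;
```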

(Here, my name prefix is testv2.) If no records are returned, look for errors
in the athena_partition_refresh function or try invoking it directly.

And there you have it! Ingested log data is parsed, classified, and scanned by the StreamAlert rules
engine, and any resulting alerts are delivered to your configured output(s) within minutes.