To test a new rule, first create a new JSON file anywhere within the tests/integration/rules directory with the .json extension.

This file should contain the following structure:

[{"data":"Either a string, or JSON object","description":"This test should trigger or not trigger an alert","log":"The log name declared in logs.json","service":"The service sending the log - kinesis, s3, sns, or stream_alert_app","source":"The exact resource which sent the log - kinesis stream name, s3 bucket ID, SNS topic name, or stream_alert_app_function name","trigger_rules":["list of rule names which should generate alerts","another potential rule name could go here"]}]

Note

Multiple tests can be included in one file by adding them to the array above.
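For example, a file with two test events might look like the following sketch (the log, source, and rule names here are placeholders, not part of the default configuration):

[
  {
    "data": {"field": "value_one"},
    "description": "This event should trigger the example rule",
    "log": "example_log",
    "service": "kinesis",
    "source": "example_kinesis_stream",
    "trigger_rules": ["example_rule"]
  },
  {
    "data": {"field": "value_two"},
    "description": "This event should not trigger any alerts",
    "log": "example_log",
    "service": "kinesis",
    "source": "example_kinesis_stream",
    "trigger_rules": []
  }
]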

When specifying the test data, it can be provided in one of two fields:

"data": An entire example record, with all of the fields necessary to properly classify the record

"override_record": A subset of the example record, where only the fields relevant to the test are populated

The advantage of "override_record" is that the overall test event is much smaller.

The testing framework will auto-populate the records behind the scenes with the remaining fields for that given log type.

Let’s say a rule is only checking the value of source in the test event. In that case, there’s no added benefit to filling in all of the other fields. Here is a sketch of what the event might look like with override_record (the log, source, and rule names are placeholders):
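[
  {
    "override_record": {
      "source": "1.1.1.2"
    },
    "description": "Testing only the source field of the record",
    "log": "example_log",
    "service": "kinesis",
    "source": "example_kinesis_stream",
    "trigger_rules": ["example_rule"]
  }
]

Note that the source inside override_record is a field of the log record itself, while the top-level source identifies the resource that sent the log.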

Tests are run via the manage.py script. These tests can validate rules for accuracy or send alerts to outputs to verify proper configuration.

When adding new rules, it is only necessary to run tests for the rule processor. When making code changes to the alert processor, such as adding a new output integration for sending alerts, tests for the alert processor should also be run.

To run integration tests for the rule processor:

$ python manage.py lambda test --processor rule

To run integration tests for the alert processor:

$ python manage.py lambda test --processor alert

To run end-to-end integration tests for both processors:

$ python manage.py lambda test --processor all

Integration tests can be restricted to specific rules to reduce time and output.
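For example, the following sketch uses the --rules argument (described below for live tests) with placeholder rule names:

$ python manage.py lambda test --processor rule --rules <rule_01> <rule_02>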

Integration tests can also send live test alerts to the outputs configured for rules within a specified cluster.
This can be combined with an optional list of rules to test (using the --rules argument):

$ python manage.py live-test --cluster <cluster_name>

Here is a sample command showing how to run live tests against two rules, such as those included as integration tests in the default StreamAlert configuration (placeholder names shown; substitute rule names from your own configuration):
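$ python manage.py live-test --cluster <cluster_name> --rules <rule_01> <rule_02>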

In some cases, StreamAlert may receive incoming logs of a known type that no specific rules apply to.
Even so, it is best practice to write schemas for these logs and verify that they are valid.

This is possible by first adding the new schema(s) to conf/logs.json and creating test record(s) in tests/integration/rules/
containing samples of real logs, without actually adding a corresponding rule. Running the manage.py script with the validate-schemas
option will iterate over all JSON test files and attempt to classify each record.

To run schema validation on all test files:

$ python manage.py validate-schemas

To run schema validation on a specific test file within tests/integration/rules/, a command along these lines should work (the --test-files argument is an assumption here; check the manage.py help output for the exact flag):
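$ python manage.py validate-schemas --test-files <test_file_name>.json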