Simplified Deployment

The SkaLogs Platform consists of a bundle – the SkaLogs Bundle (GitHub repo) – which deploys many services and scales them according to the allocated self-hosted resources (cloud or on-premise). The entire platform is deployed via a few Ansible scripts.

ALERTS - NOTIFICATIONS

Define customized thresholds based on computed metrics, and create alerts and notifications based on those thresholds and on events.
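A minimal sketch of the idea, assuming a hypothetical computed metric and threshold structure (these names are illustrative, not the SkaLogs API):

```python
# Threshold-alert sketch (hypothetical names; not the SkaLogs alerting API).
from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str       # e.g. "error_count_5m", a metric computed over a window
    max_value: float  # alert when the computed metric exceeds this value

def check(thresholds, computed_metrics):
    """Return a notification message for every threshold that is breached."""
    alerts = []
    for t in thresholds:
        value = computed_metrics.get(t.metric)
        if value is not None and value > t.max_value:
            alerts.append(f"ALERT: {t.metric}={value} exceeds {t.max_value}")
    return alerts

# Example: metrics computed over a 5-minute window
print(check([Threshold("error_count_5m", 100)], {"error_count_5m": 250}))
```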

ELASTICSEARCH

We manage the Elasticsearch cluster and optimize data replication and partitioning by balancing shards across your ES instances.
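For illustration, shard and replica counts are ordinary index settings in Elasticsearch; a sketch using the REST API directly (the index name and values are examples, not SkaLogs defaults):

```python
# Create an index with explicit shard/replica settings via the Elasticsearch
# REST API (example values; tune to the number and size of your ES data nodes).
import requests

settings = {
    "settings": {
        "number_of_shards": 6,    # spread primary shards across data nodes
        "number_of_replicas": 1,  # one copy of each shard for resilience
    }
}
resp = requests.put("http://localhost:9200/logs-2024.01.01", json=settings)
resp.raise_for_status()
print(resp.json())
```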

CAPACITY: VOLUME & VELOCITY

Very high capacity in terms of daily ingested volume, total volume, and velocity: up to 10 TB ingested per day, up to 4 PB of total raw data without replication, and over 100K EPS (events, i.e. JSON documents, per second).
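As a rough sanity check on how those figures relate, assuming an average event size of about 1 KB (an assumption for illustration, not a SkaLogs specification):

```python
# Back-of-envelope check: 100K events/s at an assumed ~1 KB per event.
eps = 100_000               # events per second
avg_event_bytes = 1_024     # assumed average JSON document size
seconds_per_day = 86_400

daily_bytes = eps * avg_event_bytes * seconds_per_day
print(f"{daily_bytes / 1e12:.1f} TB/day")  # ~8.8 TB/day, in line with the 10 TB/day figure
```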

VISUALIZATION - DASHBOARD

Predefined and customizable dashboards and templates, with charts and indicators covering infrastructure and applications (technical metrics, KPIs).

SkaETL Features

Pronounced “skettle”, SkaETL is a unique real-time Open Source ETL designed for and dedicated to Log processing and transformation. It is an innovative approach to data ingestion and transformation, with computing, monitoring, and alerting capabilities based on user-defined thresholds. SkaETL parses and enhances data from Kafka topics to any output (enhanced Kafka topics, Elasticsearch, more to come). SkaETL provides guided workflows simplifying the complex task of importing any kind of machine data. Sample workflows: data ingestion pipelines, grok parsing simulations, metric computations, referential creation, Kafka live stream monitors.
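A minimal sketch of the general pattern (consume from a Kafka topic, enrich, index into Elasticsearch), using kafka-python and the ES REST API for illustration; topic names and the enrichment field are examples, and this is not SkaETL's internal implementation:

```python
# Illustrative Kafka -> transform -> Elasticsearch pipeline (not SkaETL code).
import json
import requests
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "raw-logs",                          # input topic (example name)
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for record in consumer:
    doc = record.value
    doc["project"] = "demo"              # example enrichment step
    requests.post("http://localhost:9200/logs/_doc", json=doc).raise_for_status()
```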

ERROR HANDLING

Error retry mechanism for log ingestion and parsing, enabling you to recover from downtime and prevent data loss. Process several data streams simultaneously (retry queues, live queues) without having to manage them yourself.
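The retry idea can be illustrated as follows: events that fail parsing are republished to a retry topic instead of being dropped (topic names are hypothetical):

```python
# Sketch of a retry queue: failed events go to a retry topic rather than being lost.
import json
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

consumer = KafkaConsumer("raw-logs", bootstrap_servers="localhost:9092")
producer = KafkaProducer(bootstrap_servers="localhost:9092")

for record in consumer:
    try:
        event = json.loads(record.value)      # parsing step that may fail
        # ... normal processing of `event` on the live queue ...
    except json.JSONDecodeError:
        # keep the original payload so it can be reprocessed later
        producer.send("raw-logs-retry", record.value)
```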

GROK PATTERN SIMULATION

Guided workflow for Log parsing via grok patterns. Simulate the result of grok patterns on ingested Logs and validate the Log transformation and normalization process. A large set of pre-defined grok patterns is included.
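Grok patterns are essentially named regular expressions, so the simulation idea can be sketched with Python's re module (the pattern below mimics a small grok fragment and is only an example):

```python
# Simulate a grok-style pattern on a sample log line using named regex groups.
import re

# Roughly equivalent to the grok fragment "%{IP:client} %{WORD:method} %{URIPATH:path}"
pattern = re.compile(r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) (?P<method>\w+) (?P<path>\S+)")

line = "192.168.1.10 GET /api/status"
match = pattern.match(line)
print(match.groupdict() if match else "no match")
# {'client': '192.168.1.10', 'method': 'GET', 'path': '/api/status'}
```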

LOGSTASH CONFIGURATION

Generate complex Logstash configurations via a guided workflow. Once your ingestion and transformation workflow is complete, a single button click generates the corresponding Logstash configuration file, no matter how complex the Log transformation.
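Conceptually, generating a Logstash configuration means rendering the chosen input, filter, and output stages into the Logstash DSL. A minimal sketch of that idea (the generated snippet uses standard Logstash plugins, but the generator itself is illustrative, not SkaETL code):

```python
# Illustrative generator producing a tiny Logstash pipeline configuration.
def generate_logstash_conf(kafka_topic: str, grok_pattern: str, es_host: str) -> str:
    return f"""
input {{ kafka {{ topics => ["{kafka_topic}"] }} }}
filter {{ grok {{ match => {{ "message" => "{grok_pattern}" }} }} }}
output {{ elasticsearch {{ hosts => ["{es_host}"] }} }}
""".strip()

print(generate_logstash_conf("raw-logs", "%{COMBINEDAPACHELOG}", "localhost:9200"))
```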

REFERENTIALS

Build data referentials on the fly from events processed by SkaETL. A guided workflow creates referentials for later re-use, allowing you to fine-tune analysis and avoid re-processing.
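A referential is essentially a lookup table built from the event stream and reused for enrichment; a minimal in-memory sketch (field names are hypothetical):

```python
# Build a referential (lookup table) from events, then reuse it to enrich new ones.
events = [
    {"host": "web-01", "datacenter": "paris"},
    {"host": "web-02", "datacenter": "london"},
]

# Referential: host -> datacenter, built once from already-processed events
referential = {e["host"]: e["datacenter"] for e in events}

def enrich(event: dict) -> dict:
    """Add the datacenter without re-processing the original events."""
    event["datacenter"] = referential.get(event["host"], "unknown")
    return event

print(enrich({"host": "web-01", "message": "GET /api/status 200"}))
```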

MONITORING - ALERTS

Real-time monitoring, alerting, and notifications based on events and user-defined thresholds. Define at least one output from your ingestion process, and create multiple outputs to email, Slack, SNMP, or system_out.
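A sketch of fanning one alert out to several outputs (the Slack webhook URL, email addresses, and SMTP relay are placeholders; SNMP is omitted for brevity):

```python
# Fan one alert out to several outputs: stdout (system_out), a Slack webhook, and email.
import smtplib
from email.message import EmailMessage
import requests

def notify(alert: str) -> None:
    # system_out
    print(alert)

    # Slack incoming webhook (placeholder URL)
    requests.post("https://hooks.slack.com/services/XXX/YYY/ZZZ", json={"text": alert})

    # email via a local SMTP relay (placeholder addresses)
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = "SkaLogs alert", "alerts@example.com", "ops@example.com"
    msg.set_content(alert)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

notify("error_count_5m exceeded threshold")
```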