New Service Launch - Dflow

09 Oct 2019
1–2 minute read

We are very pleased to announce the launch of a new service, Dflow, a custom, cloud-native data pipeline as a service.

What’s a data pipeline?

A data pipeline is a software solution that automates the transportation and processing of data from one system to another, eliminating manual steps and human intervention.
At a very high level, a data pipeline consists of the following steps:

- ingestion of the raw data from the origin system

- processing of the input data (joining, aggregation, alteration, etc.) according to the needs of the final user

- delivery of the final output to the destination system, which can be a data warehouse, a data lake, an application, etc.

It is optional, but also recommended, to implement a monitoring system to analyze the performance of the pipeline, the amount of data ingested, the quality of the output, etc.
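The steps above can be sketched as a minimal batch pipeline. This is only an illustration of the general pattern, not Dflow's actual implementation; the CSV source, the per-customer aggregation, and the file names are hypothetical placeholders:

```python
import csv
from collections import defaultdict


def extract(src_path):
    """Ingest raw records from the origin system (here, a CSV file)."""
    with open(src_path, newline="") as f:
        return list(csv.DictReader(f))


def transform(records):
    """Process the input data: aggregate order amounts per customer."""
    totals = defaultdict(float)
    for row in records:
        totals[row["customer"]] += float(row["amount"])
    return totals


def load(totals, dst_path):
    """Deliver the final output to the destination system (here, another CSV)."""
    with open(dst_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["customer", "total"])
        for customer, total in sorted(totals.items()):
            writer.writerow([customer, f"{total:.2f}"])


def run_pipeline(src_path, dst_path):
    """Run ingestion, processing, and delivery end to end."""
    records = extract(src_path)
    totals = transform(records)
    load(totals, dst_path)
    # A production pipeline would also emit metrics here (rows ingested,
    # processing time, output quality) for the optional monitoring system.
```

In a managed deployment, a scheduler would invoke `run_pipeline` on a fixed cadence or in response to new data arriving at the source.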

What we offer

We create the data pipeline according to your specifications and needs, install and configure it on a cloud instance, and connect it to the origin and destination systems. It is fully managed by us so you don’t have to worry about environment setup, installation, dependencies, and other infrastructural aspects.
The solution is also scalable: when your systems start generating more data, we can scale up the processing power in no time.

What about the costs?

You will pay an initial fee, agreed upon together, for the development of the data pipeline software. The cost largely depends on the complexity of the processing logic and the time needed to build and test it.
After that, you will only be billed a flat monthly fee for the maintenance and hosting of your pipeline.

Why should this interest you?

You will have a fully managed, cloud-hosted, scalable solution for your data that works 24/7 and eliminates the need for human supervision and intervention.

Drop us an email at contact@cubevodata.com and we will be more than happy to set up a call to supply you with more information.