We help companies with end-to-end AI and big data strategy and implementation, working either on a staff-augmentation model or on a project basis. Our corporate office is in Munich, and our development center is in Eastern Europe.

SERVICES

Computer Vision

Image classification and localization

Object detection

Image segmentation

Visual similarity

Classical image processing

Image processing – feature extraction from images

Natural language processing


Text classification

Generate tags and captions

Chatbots

Social Media

Social media analysis and gathering social-sentiment insights

Other

Time-series forecasting and prediction

Electrical energy consumption forecasting

Stock market trend prediction

Streaming data analysis (equipment monitoring)

PRODUCTS

CAM – Web Analytics Platform

This solution is a deep dive into customer behavior and customer analytics, with the main focus on identifying the next best offer for each client. Using various algorithms and mechanisms, we extract as much information as possible from the data in real time and continuously learn more about the customer.
– On-premise deployment
– Big data or small data
– GDPR compliant
– Highly adaptable

How it works

TECHNOLOGIES

Apache Spark is a fast, general-purpose processing engine. It makes processing large data volumes both fast and inexpensive. It is open source, and probably the most popular tool for large-scale data.

Databricks makes running Spark easy and significantly improves our productivity. It takes on Spark deployment and cluster management, so we can focus on our business logic, needs, and data.

SAS is a software suite for advanced analytics, multivariate analysis, business intelligence, data management, and predictive analytics. It provides high-quality enterprise software backed by more than 40 years of experience with data.

Python is one of the most popular programming languages for data science. It makes development fast and inexpensive. With its large number of libraries it can efficiently address most data challenges, and it is easy to learn.

Scikit-learn and Pandas are best-in-class Python libraries for data science. Pandas provides rich data structures to represent data and aggregate it with a variety of functions. Once we know the data, we can train machine learning models with Scikit-learn.
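A minimal sketch of that workflow, using synthetic data with hypothetical column names (not a real client dataset):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative customer data; the column names are assumptions for this sketch.
df = pd.DataFrame({
    "visits": [1, 2, 8, 9, 3, 10, 1, 7],
    "minutes_on_site": [2.0, 3.5, 20.1, 25.0, 4.2, 30.3, 1.5, 18.7],
    "converted": [0, 0, 1, 1, 0, 1, 0, 1],
})

# Pandas: represent and aggregate the data to get to know it.
summary = df.groupby("converted")["minutes_on_site"].mean()

# Scikit-learn: once we know the data, train a simple model on it.
model = LogisticRegression().fit(df[["visits", "minutes_on_site"]], df["converted"])

# Score a new visitor with the trained model.
new_visitor = pd.DataFrame({"visits": [9], "minutes_on_site": [22.0]})
print(model.predict(new_visitor))
```

The same two-step pattern (explore with Pandas, model with Scikit-learn) scales from quick prototypes to production pipelines.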

Apache Hive is a data warehouse tool for the Hadoop stack. Hive represents various data structures and types as tables, lets us query and analyze those tables using standard SQL, and interoperates with relational databases.

Relational databases are an important part of most data solutions and platforms. They are very well optimized for persistence and analytics on structured datasets, and most reporting systems are built on top of them.
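As a tiny illustration of the kind of aggregate query reporting systems rely on, here is a sketch using Python's built-in SQLite; the table and column names are hypothetical:

```python
import sqlite3

# In-memory database; the schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 40.0), ("bob", 15.5), ("alice", 9.5)],
)

# A typical reporting query: total spend per customer.
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('alice', 49.5), ('bob', 15.5)]
```

The same GROUP BY pattern carries over unchanged to production engines such as PostgreSQL or MySQL.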

Hadoop is an open-source framework for storing and processing large volumes of data. It makes storing massive amounts of data cheap, while keeping that data easy to manipulate, process, and analyze.

Elastic provides a great toolset to easily access, search, explore, and visualize various data types. Full-text search, log analysis, and data visualization have never been easier, and it really stands out when combined with other tools.

The AWS technology stack provides various tools to build high-quality data applications in a cloud-based environment with strong security controls. Using this toolset, we can focus directly on our challenges and spend less time on technical issues.

Django is a complete, high-level web framework that encourages rapid development and clean, pragmatic design. It is well suited to building custom platform managers, control centers, and custom reports and dashboards.

Tableau is software that lets us visually explore our data and build beautiful dashboards and reports. It is also convenient for displaying reports on mobile devices, so we can carry our analytics in our pocket at any time.

Amazon Redshift is a data warehouse hosted on the Amazon Web Services platform. It can handle analytics workloads on large-scale datasets, making it a convenient tool for analytics on structured data in a cloud environment.

NoSQL technologies are convenient when the structure of the data is not relational, or not very rigid. We often use them for real-time web applications, since they are usually capable of handling large workloads.
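A small illustration of that schema flexibility, in pure Python as a stand-in for a real document store (the field names are invented for this sketch):

```python
import json

# Documents in the same "collection" need not share a rigid schema:
# the second event carries fields the first one does not have.
events = [
    {"user": "alice", "action": "click", "target": "banner"},
    {"user": "bob", "action": "purchase", "amount": 19.99, "items": ["book"]},
]

# Querying without a fixed schema: filter on whatever fields are present.
purchases = [e for e in events if e.get("action") == "purchase"]
print(json.dumps(purchases))
```

Real document databases such as MongoDB apply the same idea at scale, storing JSON-like documents and indexing the fields that queries actually use.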