Data Analytics
Still sending your data scientists to the cloud?
Tell Me More

Data What?

The availability of real-time data from sensors and processes enables businesses to continuously monitor their assets, identify patterns in recurring failures, and recognize the early-warning signals that allow those failures to be prevented. This adds complexity that demands specific technologies and platforms: large-scale ingestion, persistent storage and computation over diverse data sources, under a variety of constraints and use cases. Moreover, cloud technologies have made it far easier to set up development and production environments for analysing massive datasets, letting data scientists fully explore their data without worrying too much about the underlying distributed system.

So what's the problem?

Despite its enormous potential, Data Science is surrounded by hype, let's face it. Driven by their ambitions, many companies have started working on the topic, but only a few have been truly successful. The main barrier is the gap between stakeholders' expectations and the actual value delivered by models, together with a lack of insight into the incoming data, in terms of both its quality and the processes producing it. Moreover, analytics projects require a highly interdisciplinary team, encompassing system administrators, engineers, scientists and domain experts. Succeeding takes a significant investment and a clear strategy.

Typically, projects are built on a so-called lambda architecture, which combines a streaming layer with a batch one.
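To make the idea concrete, here is a minimal sketch in plain Python of how a lambda architecture serves queries: a slow batch layer periodically recomputes an exact view over the full history, a speed layer keeps a real-time increment, and the serving layer merges the two at query time. All names and the toy event schema are illustrative, not part of any particular framework.

```python
from collections import Counter

# Toy lambda architecture: per-sensor event counts.
master_dataset = []          # immutable, append-only event log
batch_view = Counter()       # result of the last batch recomputation
speed_view = Counter()       # real-time counts since the last batch run

def ingest(event):
    """New events land in the master dataset and the speed layer."""
    master_dataset.append(event)
    speed_view[event["sensor"]] += 1

def run_batch_layer():
    """Recompute the batch view from scratch and reset the speed layer.
    (A real system would handle events arriving mid-run; this toy skips that.)"""
    global batch_view
    batch_view = Counter(e["sensor"] for e in master_dataset)
    speed_view.clear()

def query(sensor):
    """Serving layer: merge batch and real-time views at query time."""
    return batch_view[sensor] + speed_view[sensor]

ingest({"sensor": "pump-1", "value": 3.2})
ingest({"sensor": "pump-1", "value": 3.4})
print(query("pump-1"))  # 2, served from the speed layer alone
run_batch_layer()
print(query("pump-1"))  # still 2, now served from the batch view
```

In production the two layers are separate systems (for instance a batch engine and a stream processor), and that duplication is exactly what drives the complexity discussed next.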

Such an architecture raises the complexity further. Whereas continuous integration and deployment (CI/CD) can automate and greatly speed up the software development cycle, through unit and integration tests as well as frequent releases, data scientists tend to follow a different workflow and often operate apart from the rest of the team. The result is information gaps and unexpected behavior whenever the data they use or the models they produce change.

Waste of resources? The norm!

We have a solution:

Embracing DataOps practices to treat data and models the way we treat code.
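In practice, one concrete form this takes is data quality checks that run in the same CI pipeline as ordinary unit tests, so a broken data feed fails the build just like a broken function would. Below is a minimal pytest-style sketch; the expected schema, the thresholds and the stand-in loader are illustrative assumptions, not a prescribed contract.

```python
# test_data_quality.py -- runs in CI next to the ordinary unit tests.

# Illustrative expectations for an incoming sensor feed; in a real
# pipeline these would live in a versioned schema/contract file.
EXPECTED_COLUMNS = {"sensor_id", "timestamp", "temperature"}
MAX_NULL_FRACTION = 0.01
TEMP_RANGE = (-40.0, 120.0)

def load_latest_batch():
    # Stand-in for reading the most recently ingested batch.
    return [
        {"sensor_id": "pump-1", "timestamp": 1700000000, "temperature": 21.5},
        {"sensor_id": "pump-2", "timestamp": 1700000060, "temperature": 22.1},
    ]

def test_schema():
    # Every row must carry at least the expected columns.
    assert all(EXPECTED_COLUMNS <= set(row) for row in load_latest_batch())

def test_null_fraction():
    batch = load_latest_batch()
    nulls = sum(1 for row in batch if row["temperature"] is None)
    assert nulls / len(batch) <= MAX_NULL_FRACTION

def test_value_range():
    lo, hi = TEMP_RANGE
    for row in load_latest_batch():
        assert lo <= row["temperature"] <= hi
```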

Scalable Data Processing

Scale and automate your data processing and analysis through open-source frameworks. Improve your software architecture with more scalable, maintainable and observable services. Audit your existing infrastructure, or migrate it towards stream-oriented analytics and distributed processing.
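As an illustration of what that looks like, here is a small PySpark sketch: the same few lines run locally during development and, unchanged, across a cluster in production. The input path, column names and aggregation are placeholders for a real use case.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor-aggregates").getOrCreate()

# Placeholder input: a directory of Parquet files with sensor readings.
readings = spark.read.parquet("data/sensor_readings/")

# Hourly average temperature per sensor, computed in parallel
# across however many executors the cluster provides.
hourly = (
    readings
    .withColumn("hour", F.date_trunc("hour", F.col("timestamp")))
    .groupBy("sensor_id", "hour")
    .agg(F.avg("temperature").alias("avg_temperature"))
)

hourly.write.mode("overwrite").parquet("data/hourly_aggregates/")
spark.stop()
```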

Innovative algorithms

Explore and assess the viability of new algorithms from raw data and initial hypotheses. Build innovative data-driven products that yield business insights or provide customers with value-added services. We know how much of a burden development can be. In our development process, data and models are first-class citizens: we aim for fully automated data quality assessment, as well as reproducible, comparable and standardized models.
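As a sketch of what reproducibility means in practice, the snippet below records a data fingerprint, the parameters and the random seed with every training run, so any model can be re-created and fairly compared later. The manifest layout and the toy training step are illustrative assumptions, not a fixed format.

```python
import hashlib
import json
import random

def fingerprint(rows):
    """Stable hash of the training data, so runs on different data
    are never mistaken for comparable runs."""
    payload = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

def train(rows, params, seed=42):
    random.seed(seed)  # deterministic run
    # ... actual model fitting would go here ...
    model = {"weights": [random.random() for _ in range(3)]}
    manifest = {
        "data_fingerprint": fingerprint(rows),
        "params": params,
        "seed": seed,
    }
    return model, manifest

rows = [{"x": 1, "y": 2.0}, {"x": 2, "y": 3.9}]
model, manifest = train(rows, {"lr": 0.01})
print(json.dumps(manifest, indent=2))  # stored alongside the model
```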

Consultancy & Training

Throughout your analytics journey, involve experienced consultants for custom onsite training - from the basics of distributed computing, data mining and machine learning to more advanced topics such as performance or hyperparameter tuning. Discover cutting-edge technologies. Get advice on the best next steps to improve your current solution.
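As a small taste of the hyperparameter tuning topic, here is a self-contained example using scikit-learn's grid search with cross-validation; the model, the parameter grid and the synthetic dataset are placeholders chosen purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic binary-classification data standing in for a real dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [None, 5, 10],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,          # 5-fold cross-validation per parameter combination
    scoring="f1",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```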

So how do I start?

We have set up a modular architecture based on Kubernetes that lets you take advantage of the best open-source technologies for analytics. The architecture follows DataOps practices and provides everything you need: ingestion, persistent storage, distributed processing, data exploration, business intelligence dashboards, machine learning, model benchmarking and project management, as well as ML model serving and, of course, infrastructure monitoring. We decided to release a development version of the architecture as an open-source project - so just boot it up and start developing your application!

Some of the technologies being used

Contact