Events

Delta Lake is an open source storage layer that brings reliability to data lakes. In this webinar, you will have the opportunity to hear directly from, and ask questions of, Michael Armbrust, the lead engineer who created Delta Lake.

Join our webinar on Thursday 23rd May, 4pm BST, to learn:
- How your organization can leverage real-time, streaming analytics to capitalize on time-sensitive opportunities, meet customer demands, and reduce operational risks.
- How to tackle the architectural and business challenges of transitioning from batch to real-time analytics.
- How real-time analytics have helped AWS customer Quby.
- How Amazon Kinesis and APN Partner solutions can help your organization quickly and cost-effectively implement a real-time analytics strategy.

Join the AWS Summit in Stockholm, meet the Databricks team and learn how Databricks is accelerating innovation on AWS. Stop by our stand to talk to our team of specialists, watch a demo or just pick up some cool swag. Register now!

In the next Apache Spark meetup, we are hosting Henning Kropp from Databricks, the company founded by the original authors of Apache Spark. Henning will give an introduction to managing the machine learning lifecycle with MLflow & MLlib. Besides Henning, Mate Gulyas will talk about Apache Spark performance best practices. If you are interested in Apache Spark, don't miss this awesome event!
// SCHEDULE
6:00pm - Doors open
6:30pm - Talks
8:00pm - Networking

Do you need to speed up building machine learning models and putting them into production? Databricks helps you develop, train, and tune accurate models faster. Get insights sooner by collaborating in shared notebooks across multiple analysts and data scientists. Run cutting-edge machine learning on larger data sets, leveraging the increased speed and scale of MLlib's algorithms, which are optimized for parallel execution.

In this free two-hour online training, we'll show you how you can use MLlib and MLflow with Databricks to train your own models, run reproducible experiments, and deploy into production with fewer failing jobs.

In this workshop, we'll cover best practices for enterprises using powerful open source technologies to simplify and scale ML efforts. We'll discuss how to leverage Apache Spark™, the de facto data processing and analytics engine in enterprises today, for data preparation, as it unifies data at massive scale across various sources. You'll also learn how to use ML frameworks (e.g. TensorFlow, XGBoost, scikit-learn) to train models based on different requirements. Finally, you'll learn how to use MLflow to track experiment runs between multiple users within a reproducible environment, and manage the deployment of models to production on Amazon SageMaker.

Join us for this Parisian event, hosted at the Databricks Loft: a full-day event where our team of experts will cover Databricks, the Unified Analytics Platform, Databricks Delta, and Machine Learning through a mix of presentations and demos.
Save your spot!

In this workshop, we'll cover best practices for enterprises using powerful open source technologies to simplify and scale ML efforts. We'll discuss how to leverage Apache Spark™, the de facto data processing and analytics engine in enterprises today, for data preparation, as it unifies data at massive scale across various sources. You'll learn how to use ML frameworks (e.g. TensorFlow, XGBoost, scikit-learn) to train models based on different requirements. Finally, you'll learn how to use MLflow to track experiment runs between multiple users within a reproducible environment and manage the deployment of models to production.