It’s time to announce the 4th episode of the Gluent New World webinar series, featuring James Morle! James is a database/storage visionary who has been actively contributing to the Oracle database scene for over 20 years – including his unique book Scaling Oracle 8i, which gave a full-stack overview of how the different layers of your database platform work and perform together.

The topic for this webinar is:

When the Rules Change: Next Generation Oracle Database Architectures using Super-Fast Storage

Speaker:

James Morle has been working in the high-performance database market for 25 years, most of which he has spent working with the Oracle database. After 15 years running Scale Abilities in the UK, he now leads Oracle Solutions at DSSD/EMC in Menlo Park.

Time:

Tue, Jun 21, 2016 12:00 PM – 1:15 PM CDT

Abstract:

When enabled by revolutionary storage performance, it becomes possible to think differently about physical database architecture: massive consolidation, simplified data architectures, greater data agility and reduced management overhead all become achievable. This presentation, based on the DSSD D5 platform, includes performance and cost comparisons with other platforms and shows how extreme performance is not only for extreme workloads.

It’s time to announce the 3rd episode of Gluent New World webinar series! This time Gwen Shapira will talk about Kafka as a key data infrastructure component of a modern enterprise. And I will ask questions from an old database guy’s viewpoint :)

Speaker:

Gwen is a system architect at Confluent, helping customers achieve success with their Apache Kafka implementations. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. She currently specializes in building real-time, reliable data processing pipelines using Apache Kafka. Gwen is an Oracle ACE Director, an author of “Hadoop Application Architectures”, and a frequent presenter at industry conferences. Gwen is also a committer on the Apache Kafka and Apache Sqoop projects. When Gwen isn’t coding or building data pipelines, you can find her pedaling her bike, exploring the roads and trails of California and beyond.

Time:

Tue, May 24, 2016 12:00 PM – 1:15 PM CDT

Abstract:

Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of continuously changing data in real time? The answer is stream processing, and one system that has become a core hub for streaming data is Apache Kafka.

This presentation will give a brief introduction to Apache Kafka and describe its usage as a platform for streaming data. It will explain how Kafka serves as a foundation for both streaming data pipelines and applications that consume and process real-time data streams. It will introduce some of the newer components of Kafka that help make this possible, including Kafka Connect, a framework for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library. Finally, it will describe some of our favorite use cases of stream processing and how they solved interesting data scalability challenges.
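To make the core abstraction concrete before the webinar: Kafka organizes streams as partitioned, append-only logs, with each consumer tracking its own read offset. Here is a minimal toy sketch of that idea in plain Python – the class and method names are invented for illustration and this is not the real Kafka client API:

```python
from collections import defaultdict

class TopicLog:
    """Toy in-memory model of a Kafka topic: an append-only log per
    partition, with consumers tracking their own read offsets."""

    def __init__(self, num_partitions=2):
        self.num_partitions = num_partitions
        self.partitions = defaultdict(list)

    def produce(self, key, value):
        # Kafka routes a record to a partition by key hash, so records
        # with the same key stay ordered relative to each other.
        p = hash(key) % self.num_partitions
        self.partitions[p].append((key, value))
        return p, len(self.partitions[p]) - 1   # (partition, offset)

    def consume(self, partition, offset):
        # Consumers pull from an offset they manage themselves; the log
        # is immutable, so many consumers can read at their own pace.
        return self.partitions[partition][offset:]

log = TopicLog()
p, off = log.produce("orders", {"id": 1, "amount": 9.99})
records = log.consume(p, 0)
```

The key design point this mimics is that the broker stores no per-consumer state beyond the log itself – replaying a stream is just reading from an earlier offset.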

The Gluent New World webinar series is about modern data management: architectural trends in enterprise IT and technical fundamentals behind them.

GNW02: SQL-on-Hadoop: A Bit of History, Current State-of-the-Art, and Looking towards the Future

Speaker:

This GNW episode is presented by none other than Mark Rittman, the co-founder & CTO of Rittman Mead and an all-around guru of enterprise BI!

Time:

Tue, Apr 19, 2016 12:00 PM – 1:15 PM CDT

Abstract:

Hadoop and NoSQL platforms initially focused on Java developers and slow but massively-scalable MapReduce jobs as an alternative to high-end but limited-scale analytics RDBMS engines. Apache Hive opened up Hadoop to non-programmers by adding a SQL query engine and relational-style metadata layered over raw HDFS storage, and since then open-source initiatives such as Hive Stinger, Cloudera Impala and Apache Drill, along with proprietary solutions from closed-source vendors, have extended SQL-on-Hadoop’s capabilities into areas such as low-latency ad-hoc queries, ACID-compliant transactions and schema-less data discovery – at massive scale and with compelling economics.

In this session we’ll focus on the technical foundations of SQL-on-Hadoop. We’ll first review the basic platform Apache Hive provides, then look in more detail at how ad-hoc querying, ACID-compliant transactions and data discovery engines work, along with the more specialised underlying storage formats that each now works best with. Finally, we’ll look to the future to see how SQL querying, data integration and analytics are likely to come together over the next five years to make Hadoop the default platform for running mixed old-world/new-world analytics workloads.
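The idea that makes Hive’s "relational-style metadata layered over raw HDFS storage" possible is schema-on-read: the file on disk carries no schema, and the table definition is just metadata applied to the raw bytes at query time. A toy sketch of that idea in plain Python (no Hive or Hadoop libraries involved; all names here are invented for illustration):

```python
import csv
import io

# Raw file content as it might land in HDFS: just delimited text,
# with no schema enforced at write time.
raw = """1,widget,19.99
2,gadget,4.50
3,gizmo,7.25
"""

# A Hive-style table definition is only metadata: column names and
# types that get applied to the raw bytes when a query reads them.
schema = [("id", int), ("name", str), ("price", float)]

def read_table(raw_text, schema):
    """Apply the schema at read time, yielding typed records."""
    for row in csv.reader(io.StringIO(raw_text)):
        yield {name: cast(cell) for (name, cast), cell in zip(schema, row)}

# "SELECT name FROM products WHERE price > 5" as plain iteration:
expensive = [r["name"] for r in read_table(raw, schema) if r["price"] > 5]
```

Because the schema lives outside the data, the same raw file can be exposed under different table definitions without rewriting a single byte – the property that lets multiple SQL-on-Hadoop engines share one copy of the data.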

As Gluent is all about gluing together the old world and new world in enterprises, it’s time to announce the Gluent New World webinar series!

The Gluent New World sessions cover the important technical details behind new advancements in enterprise technologies that are arriving into mainstream use.

These seminars help you stay current with the major technology changes that are inevitably arriving at your company soon (if not already), so you can make informed decisions about what to learn next – and remain relevant in your profession five years from now.

Think about software-defined storage, open data formats, cloud processing, in-memory computation, direct attached storage, all-flash and distributed stream processing – and this is just a start!

The speakers of this series are technical experts in their field – able to explain in detail how the technology works internally, which fundamental changes in the technology world have enabled these advancements and why it matters to all of you (not just the Googles and Facebooks out there).

I picked myself as the speaker for the first event in this series:

Gluent New World: In-Memory Processing for Databases

In this session, Tanel Poder will explain how the new CPU cache-efficient data processing methods help to radically speed up data processing workloads – on data stored in RAM and also read from disk! This is a technical session about internal CPU efficiency and cache-friendly data structures, using Oracle Database and Apache Arrow as examples.
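A rough illustration of why cache-friendly data structures matter (a pure-Python sketch, not Oracle’s or Arrow’s actual implementation): compare a row-oriented layout, where each record is a separate object scattered on the heap, with a column-oriented layout, where each field is one dense, contiguous buffer that a scan can stream through sequentially:

```python
from array import array

N = 1000

# Row-oriented layout: each record is its own dict, so summing one
# field means chasing pointers and touching many cache lines.
rows = [{"id": i, "amount": float(i), "flag": i % 2} for i in range(N)]
row_sum = sum(r["amount"] for r in rows)

# Column-oriented layout (the idea behind Apache Arrow): each field is
# a single contiguous buffer, so scanning "amount" reads memory
# sequentially – friendly to CPU caches, prefetchers and SIMD.
ids = array("q", range(N))
amounts = array("d", (float(i) for i in range(N)))
col_sum = sum(amounts)

assert row_sum == col_sum  # same answer; the layout changes the cost
```

The answers are identical; what changes is how many cache lines the CPU must fetch to produce them, which is exactly the efficiency lever the session covers.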