Big Data, Fast Data, Valuable Data

Many organizations are struggling to combine effective information management with analytics to identify the best data opportunities and convert them into measurable business value. With Business Data Lake solutions, you can deliver a fully engineered, enterprise-grade data lake tuned to the strategic priorities of your business, helping you realize the value of data analytics in as little as one week. Generate new insights and scale out rapidly and securely.

RIDE supports development in notebooks, the editor, R Markdown, Shiny apps, Bokeh, and other frameworks. Backed by R-Brain's optimized kernels, R and Python 3 have full language support, IntelliSense, a debugger, and data view. Autocomplete and content assist are available for the SQL and Python 2 kernels. Spark (standalone) and TensorFlow images are also provided.

By using Docker to manage workspaces, the platform provides a secure and stable development environment, with powerful admin controls over resources and levels of access, including memory usage, CPU usage, and idle time.

The latest stable version of the IDE is always available to all users, with no upgrades or additional DevOps work required. R-Brain also delivers customized development environments for organizations, which can set up their own Docker registry to use their own customized images.

The RIDE platform is a turnkey solution that increases the efficiency of your data science projects by enabling teams to work collaboratively without switching between tools. Explore and visualize data and share analyses, all in one IDE with root access and connections to Git repositories and databases.

IT is a key player in the digital and cognitive transformation of business processes, delivering solutions that improve business value with analytics. This session will explain, step by step, the journey to secure production while adopting new analytics technologies that leverage mainframe core business assets.

Data scientists are rare and highly valued individuals, and for good reason: making sense of data and using machine learning libraries requires an unusual blend of advanced skills. Why, then, do data scientists spend the majority of their time getting data ready for models, and only a fraction doing the high-value work?

In this talk we introduce the concept of Data Fabric, a new way to provide a self-service model for data, where data scientists can easily discover, curate, share, and accelerate data analysis using Python, R, and visualization tools, no matter where the data is managed, no matter the structure, and no matter the size.

We will talk through the role of Apache Arrow, the in-memory columnar data standard that is accelerating analytics, including GPU-based processing, as well as the role of Pandas and Arrow in providing unprecedented speed when accessing datasets from Python.

Researchers generate huge amounts of valuable unstructured data and articles every day. The potential of this information is enormous: cancer and pharmaceutical breakthroughs, advances in technology, and cultural research that can improve the world we live in.

This webinar discusses how text mining and machine learning can be used to make connections across this broad range of files and help drive innovation and research. We discuss using Kubernetes microservices to analyse the data, then applying machine learning and graph databases to simplify its reuse.

The data economy and digital technologies are deeply transforming almost all areas of our lives. Among the most heavily transformed are insurance and healthcare, where a number of really interesting developments may redefine the way we take care of ourselves and the way we consume and use insurance.

This webinar offers a fascinating exploration of the opportunities, risks, and new models supporting the digital transformation of insurance and healthcare, including:

-Harnessing the power of data to better help mental health patients, carers, and medical personnel with treatments
-Assessing the risk of developing a broad range of illnesses and engaging with users to propose personalised healthy-life plans
-Using big data and analytics to track and prepare for epidemics
-Using data to better cover cars and drivers with car insurance
-Using social media data to help insurers engage better with customers

Public cloud deployments have become irresistible in terms of flexibility, low barriers to entry, security, and developer friendliness. But the sheer inertia of traditional data lakes makes them difficult to transition to the cloud. In this talk we'll look at examples of how leading companies have made the transition using open source technologies and hybrid strategies.

Instead of following a "lift and shift" strategy for moving data lake workloads to the cloud, teams should weigh considerations unique to the cloud alongside traditional approaches: compute (e.g., GPU, FPGA), storage (object store vs. file store), integrations, and security.

Viewers will take away techniques they can immediately apply to their own projects.

The concept of data lakes evolved to address the challenges and opportunities of managing big data.

Organizations are investing massive amounts of time and money to upgrade existing data infrastructure and build data lakes, whether on-premises or in the cloud.

This talk will discuss architectures and design options for implementing data lakes with open source tools. Also covered are the challenges of upgrading and migrating from existing data warehouses, metadata management, supporting self-service, and managing production deployments.

As an enterprise customer, you are potentially using IBM Z in a hybrid cloud implementation. Let's look at how to benefit from cloud access to mainframe data without moving it off Z, thereby improving security, reducing integration challenges, and answering your GDPR auditor's needs.

Iver van de Zand will talk about and demo the latest SAP innovations for analytics in the cloud. Key themes are live connectivity and the closed loop of combined business intelligence, planning, and predictive analytics, all in one environment, fully ready and prepared for big data.

David Siegel, Blockchain, decentralization and business agility expert

Still confused about this whole Blockchain thing? Interested in investing in digital currencies, but not sure where to start? Want to get a better idea of the threats and opportunities?

David Siegel is a Blockchain, decentralization, and business agility expert who has been a high-level management and strategy consultant to companies such as Sony, Hewlett Packard, Amazon, NASA, Intel, and many start-ups. David has been praised for explaining Blockchain in the simplest and most interesting way.

What you will learn:
-What is Bitcoin?
-What is the blockchain?
-What is Ethereum? What is Ether?
-What is a distributed application?
-What is a smart contract?
-What is a triple ledger?
-What about identity and security?
-What business models are at risk?
-What are the opportunities?
-What should we do?

Today the payments industry faces a rebirth by necessity. Financial institutions process massive volumes of customer and payments transaction data, much of it unstructured and untapped.

Cognitive systems have the ability to understand, reason, and learn. In financial services, applying cognitive capabilities to real-world payments issues, such as safer and faster payments, is yielding significant results. Furthermore, risk and compliance and segment-of-one engagement are areas where the ROI is tremendous when advanced analytics and artificial intelligence are leveraged in concert.

Learn from real world use cases of how financial institutions globally have gained significant competitive advantage by becoming a truly Cognitive Bank.

"HDFS on Kubernetes: Lessons Learned" is a webinar presentation intended for software engineers, developers, and technical leads who develop Spark applications and are interested in running Spark on Kubernetes. Pepperdata has been exploring Kubernetes as a potential Big Data platform with several other companies as part of a joint open source project.

Data visualization must be intuitive for non-IT business leaders to see patterns in data. Representing data in a graphical or pictorial format is easy, but structuring the data in the best and most logical way can be tricky.

In this session, Umesh will discuss how to represent data simply to enable quicker and better business decisions. He will walk through several data visualization techniques using business cases and examples. By the end of the session, you will not only know different data visualization techniques but also understand the circumstances under which each should be used and the best way to represent particular data sets for different business cases.
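As a small, hypothetical illustration of matching technique to question (the data and chart pairings below are our own sketch, not the session's material, and assume matplotlib): a bar chart suits comparisons across categories, while a line chart suits trends over time.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Comparing categories -> bar chart (illustrative sales figures)
regions = ["North", "South", "East", "West"]
sales = [120, 95, 140, 110]

# Showing a trend over time -> line chart (illustrative monthly revenue)
months = list(range(1, 13))
revenue = [100 + 5 * m for m in months]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.bar(regions, sales)
ax1.set_title("Comparison: bar chart")
ax2.plot(months, revenue, marker="o")
ax2.set_title("Trend: line chart")
fig.tight_layout()
fig.savefig("chart_choice.png")
```

Swapping the chart types here (a line through unordered categories, or bars for a continuous trend) would obscure the very patterns each question asks about, which is the core of choosing the right technique.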

Predictive Analytics: everyone is talking about it, and many organisations claim to be doing it. But are they? And what insights do they gain in order to make tactical or strategic changes? How can analysts work with decision makers, sharing results in a visually effective and meaningful way while also informing them about possible courses of action?

This webinar is presented by Andy Kriebel, Head Coach at the Data School and Eva Murray, Tableau Evangelist at Exasol. Our guest speaker on Predictive Analytics is Benedetta Tagliaferri, Consulting Analyst at The Information Lab.

The webinar will look at some examples of predictive analysis and will show data visualization examples that are actionable and can drive further questions and discussions in an organisation.

Looking to take your graphs to the next level? Want to make sure you choose the right visualization? Plagued by the challenges of geospatial heat maps?

Get your questions ready and join this session, where data experts Carl and Brett will go over the common questions they are asked and address the data visualization issues you've been plagued with, including how to:

-Use location-based data to put your visualization on the map
-Uncover new relationships, patterns and opportunities
-Identify emerging trends
-Answer comparative business questions with set analysis
-Apply best practices for creating an aesthetically pleasing and useful visualization

When analysis is used by decision makers who didn't create it, communicating the information and the message it conveys becomes critical. There is a plethora of ways to lay out reports and dashboards, even within a single company.
Enter the SUCCESS formula: that "lightbulb" moment.

Introduced by the IBCS Association (International Business Communication Standards), the SUCCESS formula provides conceptual, perceptual, and semantic rules that enable faster, better, and less costly results at all stages of business communication and decision-making.

This webinar will introduce the seven rules of SUCCESS, which provide a toolkit to help analysts design visualisations that better reach their target audience and lead to better decisions.

The webinar will also cover Philips' journey to implementing IBCS principles in its global "Accelerate!" initiative.