The Big BI Dilemma - Bimodal Logical Data Warehouse to the Rescue!

The classic unimodal data warehouse architecture has reached its limits: it primarily supports structured data and was never designed for newer data types such as social, streaming, and IoT data. A new BI architecture, such as the “logical data warehouse”, is required to augment traditional, rigid unimodal data warehouse systems with a bimodal architecture that supports requirements that are experimental, flexible, exploratory, and self-service oriented.

In this webinar, logical data warehousing expert Rick van der Lans explains how you can implement an agile data strategy using a bimodal logical data warehouse architecture.

In the storage world, NVMe™ is arguably the hottest thing going right now. Go to any storage conference, whether vendor-specific or vendor-neutral, and you’ll see NVMe touted as the latest and greatest innovation. It stands to reason, then, that when you want to run NVMe over a network, you need to understand NVMe over Fabrics (NVMe-oF).

TCP – the long-standing mainstay of networking – is the newest transport technology to be approved by the NVM Express organization. This can mean really good things for storage and storage networking – but what are the tradeoffs?
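As background, NVMe/TCP frames all traffic as Protocol Data Units (PDUs) carried over an ordinary TCP connection. Here is a minimal Python sketch of packing and parsing the 8-byte PDU common header, with the field layout as described in the NVMe/TCP specification; the values used are illustrative, and this is not a working initiator:

```python
import struct

# NVMe/TCP PDU common header (8 bytes, little-endian),
# per the NVMe/TCP transport specification:
#   PDU-Type (1B), FLAGS (1B), HLEN (1B), PDO (1B), PLEN (4B)
CH_FORMAT = "<BBBBI"

def pack_common_header(pdu_type, flags, hlen, pdo, plen):
    """Pack the fixed 8-byte common header that precedes every PDU."""
    return struct.pack(CH_FORMAT, pdu_type, flags, hlen, pdo, plen)

def unpack_common_header(data):
    """Parse the first 8 bytes of an incoming PDU."""
    pdu_type, flags, hlen, pdo, plen = struct.unpack_from(CH_FORMAT, data)
    return {"type": pdu_type, "flags": flags, "hlen": hlen,
            "pdo": pdo, "plen": plen}

# Example: an ICReq (Initialize Connection Request) PDU has type 0x00;
# the HLEN/PLEN values here are illustrative only.
hdr = pack_common_header(pdu_type=0x00, flags=0, hlen=128, pdo=0, plen=128)
print(unpack_common_header(hdr))
```

Running NVMe semantics over a byte stream this way is exactly why TCP needs no special network hardware, and also where the trade-offs discussed in the webinar come from.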

In this webinar, the lead author of the NVMe/TCP specification, Sagi Grimberg, and J Metz, member of the SNIA and NVMe Boards of Directors, will discuss:
•What is NVMe/TCP?
•How does NVMe/TCP work?
•What are the trade-offs?
•What should network administrators know?
•What kind of expectations are realistic?
•What technologies can make NVMe/TCP work better?
•And more…

Join SSSI members and respected analysts Tom Coughlin and Jim Handy for a look into their new Emerging Memory and Storage Technologies Report. Tom and Jim will examine emerging memory technologies and their interaction with standard memories, how a new memory layer improves computer performance, and the technical advantages and economies of scale that contribute to the enthusiasm for emerging memories. They will provide an outlook on market projections and on enabling and driving applications. The webcast is the perfect preparation for the 2019 SNIA Persistent Memory Summit on January 24, 2019.

If you’re a storage equipment vendor, management software vendor or end-user of the ISO approved SNIA Storage Management Initiative Specification (SMI-S), you won’t want to miss this presentation. Enterprise storage industry expert Mike Walker will provide an overview of new indications, methods, properties and profiles of SMI-S 1.7 and the newly introduced version, SMI-S 1.8. If you haven’t yet made the jump to SMI-S 1.7, Walker will explain why it’s important to go directly to SMI-S 1.8.

The Long Term Retention Technical Working Group and the Data Protection Committee will review the results of the 2017 100-year archive survey. In addition to the survey results, the presentation will cover the following topics:
· How the use of storage for archiving has evolved in ten years
· What type of information is now being retained and for how long
· Changes in corporate practices
· Impact of technology changes such as Cloud

In the history of enterprise storage there has been a trend to move from local storage to centralized, networked storage. Customers found that networked storage provided higher utilization, centralized and hence cheaper management, easier failover, and simplified data protection, which has driven the move to FC-SAN, iSCSI, NAS and object storage.

Recently, distributed storage, where storage lives in multiple locations but can still be shared, has become more popular. Advantages of distributed storage include the ability to scale up performance and capacity simultaneously and, in the hyperconverged use case, to use each node (server) for both compute and storage. Attend this webcast to learn about:
•Pros and cons of centralized vs. distributed storage
•Typical use cases for centralized and distributed storage
•How distributed storage works for SAN, NAS, parallel file systems, and object storage
•How hyperconverged has introduced a new way of consuming storage
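To make the centralized-vs-distributed contrast above concrete, here is a hedged Python sketch of one common distributed-storage building block: consistent hashing, which lets a cluster place and locate data across nodes without a central metadata server. This is a generic illustration, not how any particular SAN, NAS, or object product works.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hash ring: maps object keys to storage nodes.

    Adding or removing a node only remaps a small fraction of keys,
    which is why many distributed object stores use this technique.
    """

    def __init__(self, nodes, vnodes=100):
        self.ring = []                       # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):          # virtual nodes smooth the load
                h = self._hash(f"{node}#{i}")
                bisect.insort(self.ring, (h, node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Walk clockwise from the key's hash to the next node."""
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("volume-42/block-7"))    # deterministic placement
```

Because every client can compute placement independently, performance and capacity scale together as nodes are added, which is the scaling property the blurb describes.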

After the webcast, please check out our Q&A blog http://bit.ly/2xSajxJ

Network-intensive applications, like networked storage or clustered computing, require a network infrastructure with high bandwidth and low latency. Remote Direct Memory Access (RDMA) supports zero-copy data transfers by enabling movement of data directly to or from application memory. This results in high bandwidth, low latency networking with little involvement from the CPU.
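RDMA itself requires RNIC hardware and verbs APIs, but the zero-copy idea can be illustrated in plain Python: os.sendfile() asks the kernel to move bytes from a file to a socket without copying them through user-space buffers, the same CPU-offload motivation behind RDMA. This is a loose analogy for illustration, not an RDMA example.

```python
import os
import socket

def serve_file_zero_copy(path, conn):
    """Send a file over a connected socket without copying the data
    through user space: the kernel moves pages directly, much as RDMA
    moves application memory directly between hosts."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            # os.sendfile returns the number of bytes actually sent
            offset += os.sendfile(conn.fileno(), f.fileno(), offset,
                                  size - offset)

# Usage sketch: accept a TCP connection, then stream a file to it.
# srv = socket.create_server(("0.0.0.0", 9000))
# conn, _ = srv.accept()
# serve_file_zero_copy("/tmp/payload.bin", conn)
```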

In the next SNIA ESF “Great Storage Debates” series webcast, we’ll be examining two commonly known RDMA protocols that run over Ethernet: RDMA over Converged Ethernet (RoCE) and the IETF-standard iWARP. Both are Ethernet-based RDMA technologies that reduce the amount of CPU overhead in transferring data among servers and storage systems.

The goal of this presentation is to provide a solid, vendor-neutral foundation on both RDMA technologies, discussing the capabilities and use cases of each so that attendees can make informed, educated decisions.

Join to hear the following questions addressed:

•Both RoCE and iWARP support RDMA over Ethernet, but what are the differences?
•What are the use cases for RoCE and iWARP, and what differentiates them?
•UDP/IP and TCP/IP: which protocol uses which, and what are the advantages and disadvantages?
•What are the software and hardware requirements for each?
•What are the performance/latency differences of each?

Join our SNIA experts as they answer all these questions and more in this next Great Storage Debate.

After you watch the webcast, check out the Q&A blog http://bit.ly/2OH6su8

We’re increasingly in a multi-cloud environment, with potentially multiple private, public and hybrid cloud implementations in support of a single enterprise. Organizations want to leverage the agility of public cloud resources to run existing workloads without having to re-plumb or re-architect them and their processes. In many cases, applications and data have been moved individually to the public cloud. Over time, some applications and data might need to be moved back on premises, or moved partially or entirely from one cloud to another.

That means simplifying the movement of data from cloud to cloud. Data movement and data liberation – the seamless transfer of data from one cloud to another – have become major requirements.

In this webcast, we’re going to explore some of these data movement and mobility issues with real-world examples from the University of Michigan. Register now for discussions on:

•How do we secure data both at-rest and in-transit?
•Why is data so hard to move? What cloud processes and interfaces should we use to make data movement easier?
•How should we organize our data to simplify its mobility? Should we use block, file or object technologies?
•Should the application of the data influence how (and even if) we move the data?
•How can data in the cloud be leveraged for multiple use cases?
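As a small, hedged illustration of the object-based approach to data mobility, the sketch below copies an object between two S3-compatible endpoints with boto3. The endpoints, bucket names, and key are hypothetical placeholders, and a real migration would add retries, integrity checks, and encryption in transit.

```python
import boto3

# Two S3-compatible endpoints; URLs, keys, and bucket names are
# hypothetical placeholders for illustration only.
src = boto3.client("s3", endpoint_url="https://cloud-a.example.com")
dst = boto3.client("s3", endpoint_url="https://cloud-b.example.com")

def move_object(bucket_src, bucket_dst, key):
    """Stream one object from cloud A to cloud B, then verify and delete.

    Streaming the body avoids staging the whole object on local disk,
    which matters when 'liberating' terabytes from one cloud to another.
    """
    obj = src.get_object(Bucket=bucket_src, Key=key)
    dst.upload_fileobj(obj["Body"], bucket_dst, key)

    # Only remove the source copy once the destination confirms it exists.
    dst.head_object(Bucket=bucket_dst, Key=key)
    src.delete_object(Bucket=bucket_src, Key=key)

move_object("research-data", "research-data-mirror", "genomics/run-001.tar")
```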

For years, banks have been sitting on a goldmine of customer data. Only recently have they started exploiting it, and, not surprisingly, for their own benefit.
Personal data can give great insights that drive bank outcomes by decreasing credit losses and reducing fraud losses. In this webinar, Paul Clark, CTO, will look at how we can use customer data to:
* Drive the customer’s own advantage
* Avoid slip-ups
* Dodge nasty charges
* Optimise the customer’s finances end to end.

GDPR requires organizations to identify, classify, and protect personal information, but how do you prepare and protect against a possible breach if you don't know what data you have, where it lives, or how it's classified?
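One common first step is automated discovery: scanning data stores for patterns that look like personal information so it can be classified before a breach rather than after. A minimal, hedged sketch follows; the regex patterns are illustrative and far from production-grade.

```python
import re

# Illustrative patterns only; real PII discovery tools use far richer
# detectors (checksums, dictionaries, ML-based entity recognition).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_record(text):
    """Return the set of PII categories detected in a text field."""
    return {name for name, pat in PII_PATTERNS.items() if pat.search(text)}

row = "Contact: jane.doe@example.com, SSN 123-45-6789"
print(classify_record(row))   # e.g. {'email', 'us_ssn'}
```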

With the General Data Protection Regulation (GDPR) becoming enforceable in the EU on May 25, 2018, many data scientists are worried about the impact that this regulation, and similar initiatives in other countries that give consumers a “right to explanation” of decisions made by algorithms, will have on the field of predictive and prescriptive analytics.

In this session, Beau will discuss the role of interpretable algorithms in data science as well as explore tools and methods for explaining high-performing algorithms.
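As one concrete example of this kind of tooling, model-agnostic explanation methods such as permutation importance can be applied to otherwise opaque, high-performing models. A hedged sketch using scikit-learn follows; these are not necessarily the specific tools covered in the session.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A high-performing but hard-to-interpret model...
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# ...explained by shuffling each feature and measuring the score drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```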

Beau Walker holds a Juris Doctor (law degree) as well as BS and MS degrees in Biology and in Ecology and Evolution. Beau has worked in many domains, including academia, pharma, healthcare, life sciences, insurance, legal, financial services, marketing, and IoT.

Implementing AI applications based on machine learning is a significant topic for organizations embracing digital transformation. By 2020, 30% of CIOs will include AI in their top five investment priorities according to Gartner’s Top 10 Strategic Technology Trends for 2018: Intelligent Apps and Analytics. But to deliver on the AI promise, organizations need to generate good quality data to train the algorithms. Failure to do so will result in the following scenario: "When you automate a mess, you get an automated mess."

This webinar covers:

- An introduction to machine learning use cases and challenges provided by Kirk Borne, Principal Data Scientist at Booz Allen Hamilton and top data science and big data influencer.
- How to achieve good data quality based on harmonized semantic metadata presented by Andreas Blumauer, CEO and co-founder of Semantic Web Company and a pioneer in the application of semantic web standards for enterprise data integration.
- How to apply a combined approach when semantic knowledge models and machine learning build the basis of your cognitive computing. (See Attachment: The Knowledge Graph as the Default Data Model for Machine Learning)
- Why a combination of machine and human computation approaches is required, not only from an ethical but also from a technical perspective.
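To ground the “automated mess” point, here is a hedged sketch of the kind of basic quality gates worth running before any training data reaches an algorithm; the checks and thresholds are illustrative.

```python
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Basic pre-training data-quality gates; thresholds are illustrative."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_fraction": df.isna().mean().to_dict(),   # per column
        "constant_columns": [c for c in df.columns if df[c].nunique() <= 1],
    }

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "churned": [0, 1, 1, None],
    "source": ["crm", "crm", "crm", "crm"],   # no signal, just noise
})

report = quality_report(df)
assert report["duplicate_rows"] == 1, "deduplicate before training"
print(report)
```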

How are financial service firms around the world using machine learning systems today to identify and address risk in transactional datasets?

This webinar will look at a new approach to transaction analysis and illustrate how the combination of traditional rules-based approaches can be augmented with next-generation machine learning systems to uncover more in the data, faster and more efficiently.
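For a flavour of what “rules augmented with machine learning” can look like in practice, here is a hedged sketch that pairs a hard rule with an unsupervised anomaly score from scikit-learn's IsolationForest. The features and thresholds are synthetic and illustrative, not MindBridge's actual method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Feature columns: [amount, hour_of_day]; values are synthetic.
normal = np.column_stack([rng.normal(120, 30, 500), rng.integers(8, 18, 500)])
odd = np.array([[9_500, 3]])          # large transfer at 3 a.m.
transactions = np.vstack([normal, odd])

# Traditional rule: flag anything over a fixed threshold.
rule_flags = transactions[:, 0] > 5_000

# ML layer: learn what 'normal' looks like, flag statistical outliers.
model = IsolationForest(contamination=0.01, random_state=0)
ml_flags = model.fit_predict(transactions) == -1   # -1 marks anomalies

# Combine: either signal escalates the transaction for review.
review = rule_flags | ml_flags
print(f"{review.sum()} of {len(transactions)} transactions escalated")
```

The ML layer catches transactions the fixed rule misses (odd timing, unusual combinations), which is the "uncover more in the data" claim in concrete form.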

We will span the various applications in banking, payments, trading, and compliance, looking at a variety of use cases from bank branch transaction analysis to trading data validation.

Anyone interested in financial technology, next-generation machine learning systems and the future of the financial services industry will find this webinar of specific interest.

About the speaker:
Erik McBain, CFA, is a Strategic Account Manager for MindBridge Ai, where he specializes in the deployment of emerging technologies such as artificial intelligence and machine learning systems in global financial institutions and corporations. Over his 10-year career in banking and financial services (Deutsche Bank, CIBCWM, Central Banking), Erik has been immersed in the trading, analysis, and sale of financial instruments and the deployment of new payment, banking, and intelligent technologies. Erik's focus is identifying the various opportunities created through technological disruption, creating partnerships, and applying a client-centered innovation process to create transformative experiences, products, and services for his clients.

Artificial intelligence has a huge role to play in banking, nowhere more so than in sustainable finance. However, the data is very patchy, and much of the source data needed to inform sustainable finance is not available. The challenge as we set off on this new journey is to make sure that the data and algorithms used are transparent and unbiased.

In this session, Richard Peers, Director of the Financial Services industry at Microsoft, will share how disruption and new entrants are bringing new business models and technology to banking, just as they have in other industries such as the auto industry.

One new area is sustainable finance, a voluntary initiative under the COP agreement on climate change, but the data needed to inform the markets is a challenge. Big data, machine learning, and AI can help resolve this.

But with such important issues at stake, this session will outline how AI must be designed according to ethical principles.

Tune in to this session for a high-level view of some key trends and technologies in banking. Get insight into sustainable finance; why AI can help and why Ethical AI is important; and the Microsoft principles for Ethical AI.

The tragedy of the commons, popularized by biologist Garrett Hardin in 1968, describes how shared resources are overused and eventually depleted. He compared shared resources to a common grazing pasture; in this scenario, everyone with rights to the pasture, acting in self-interest for the greatest short-term personal gain, depletes the resource until it is no longer viable.

The banking ecosystem, and the data that binds it together, is not all that different. For many years, mis-selling scandals, cookie-cutter products, and dumb mass-marketing have seen players acting in their own interest, each according to what they believe the ecosystem should look like, how it should evolve, and who controls it.

But with the introduction of open banking, there are signs that new banking ecosystems are set to thrive. Taking Hardin’s notion, collaboration in the open banking future could benefit everyone in the ecosystem – the traditional banks, the FinTechs, the tech titans with their expertise in delivering services at scale, and yet-to-be-defined participants, likely to include the large data players such as energy firms, retailers and telcos.

You won’t want to miss the opportunity to hear leading data storage experts provide their insights on prominent technologies that are shaping the market. With the exponential rise in demand for high capacity and secured storage systems, it’s critical to understand the key factors influencing adoption and where the highest growth is expected. From SSDs and HDDs to storage interfaces and NAND devices, get the latest information you need to shape key strategic directions and remain competitive.

You can do a lot with a Raspberry Pi and ASF projects, from a tiny object connected to the internet to a small server application. The presentation will explain and demo the following:

- Raspberry Pi as a small server and captive portal using httpd/Tomcat.
- Raspberry Pi as an IoT sensor collecting data and sending it to ActiveMQ.
- Raspberry Pi as a Modbus supervisor controlling an Industruino (Industrial Arduino) and connected to ActiveMQ.
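As a taste of the IoT-sensor demo, here is a hedged Python sketch that publishes readings to ActiveMQ over STOMP using the stomp.py library. The broker host, credentials, and queue name are placeholders, and the actual demo may use different clients or protocols such as MQTT.

```python
import json
import random
import time

import stomp   # pip install stomp.py; ActiveMQ speaks STOMP out of the box

# Placeholder broker address and credentials for illustration.
conn = stomp.Connection([("activemq.local", 61613)])
conn.connect("admin", "admin", wait=True)

while True:
    # Stand-in for a real GPIO/I2C sensor read on the Raspberry Pi.
    reading = {"sensor": "pi-temp-01",
               "celsius": round(20 + random.random() * 5, 2),
               "ts": time.time()}
    conn.send(destination="/queue/sensor.readings", body=json.dumps(reading))
    time.sleep(10)
```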

The 10x growth of transaction volumes, 50x growth in data volumes, and the drive for real-time visibility and responsiveness over the last decade have pushed traditional technologies, including databases, beyond their limits. Your choices are to either buy expensive hardware to accelerate the wrong architecture, or do what other companies have started to do and invest in the technologies used for modern hybrid transactional/analytical processing (HTAP) applications.

Learn some of the current best practices in building HTAP applications, and the differences between two of the more common technologies companies use: Apache® Cassandra™ and Apache® Ignite™. This session will cover:

- The requirements for real-time, high volume HTAP applications
- Architectural best practices, including how in-memory computing fits in and has eliminated tradeoffs between consistency, speed and scale
- A detailed comparison of Apache Ignite and GridGain® for HTAP applications
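To make the HTAP idea concrete, here is a hedged sketch using Apache Ignite's Python thin client (pyignite): the same in-memory table serves a transactional write and an analytical aggregate with no ETL hop between them. The connection details are placeholders, and this is an illustration rather than a benchmark-grade setup.

```python
# pip install pyignite -- Apache Ignite's official Python thin client
from pyignite import Client

client = Client()
client.connect("127.0.0.1", 10800)   # default thin-client port

# Transactional side: ingest orders as they arrive.
client.sql("""CREATE TABLE IF NOT EXISTS orders (
                  id INT PRIMARY KEY, amount DECIMAL, region VARCHAR)""")
client.sql("INSERT INTO orders (id, amount, region) VALUES (?, ?, ?)",
           query_args=[1, 99.50, "EMEA"])
client.sql("INSERT INTO orders (id, amount, region) VALUES (?, ?, ?)",
           query_args=[2, 250.00, "APAC"])

# Analytical side: aggregate over the same live data, no ETL hop.
cursor = client.sql(
    "SELECT region, SUM(amount) FROM orders GROUP BY region")
for region, total in cursor:
    print(region, total)

client.close()
```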

About the speaker: Denis Magda is the Director of Product Management at GridGain Systems and Vice President of the Apache Ignite PMC. He is an expert in distributed systems and platforms who actively contributes to Apache Ignite and helps companies and individuals deploy it for mission-critical applications. You can be sure to come across Denis at conferences, workshops, and other events, sharing his knowledge about use cases, best practices, and implementation tips and tricks on how to build efficient applications with in-memory data grids, distributed databases, and in-memory computing platforms, including Apache Ignite and GridGain.

Before joining GridGain and becoming a part of the Apache Ignite community, Denis worked for Oracle, where he led the Java ME Embedded Porting Team -- helping bring Java to IoT.

When monitoring an increasing number of machines, the infrastructure and tools need to be rethought. ExDeMon, a new tool for detecting anomalies and raising actions, has been developed to perform well on this growing infrastructure. Considerations from its development and implementation will be shared.
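ExDeMon itself is purpose-built for CERN's scale, but the core pattern, comparing each new metric value against a learned baseline and raising an action when it deviates, can be sketched simply. This is a generic illustration, not ExDeMon's implementation:

```python
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    """Flag metric values more than `k` standard deviations from a
    rolling baseline -- the simplest form of 'detect and raise action'."""

    def __init__(self, window=60, k=3.0):
        self.history = deque(maxlen=window)
        self.k = k

    def observe(self, value):
        anomalous = False
        if len(self.history) >= 10:   # need a minimal baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) > self.k * sigma
        self.history.append(value)
        return anomalous

detector = RollingAnomalyDetector()
for v in [50, 52, 49, 51, 50, 48, 53, 50, 51, 49, 50, 240]:
    if detector.observe(v):
        print(f"anomaly: {v}")        # the spike to 240 triggers the action
```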

Daniel has been working at CERN for more than three years as a big data developer, implementing different tools for monitoring the organisation's computing infrastructure.

As data analytics becomes more embedded within organizations as an enterprise business practice, the methods and principles of agile processes must also be employed.

Agile includes DataOps, which refers to the tight coupling of data science model-building and model deployment. Agile can also refer to the rapid integration of new data sets into your big data environment for "zero-day" discovery, insights, and actionable intelligence.

The Data Lake is an advantageous approach to implementing an agile data environment, primarily because of its focus on "schema-on-read", thereby skipping the laborious, time-consuming, and fragile process of database modeling, refactoring, and re-indexing every time a new data set is ingested.
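Schema-on-read fits in one hedged PySpark sketch: the JSON files land in the lake as-is, and the schema is inferred at query time rather than modeled up front. The path and field names below are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

# No upfront modeling: the schema is inferred when the data is read.
# Path and field names are hypothetical placeholders.
events = spark.read.json("s3a://data-lake/raw/clickstream/2018/*.json")
events.printSchema()   # discovered, not designed

# 'Zero-day' exploration of a data set ingested minutes ago.
(events
    .filter(F.col("event_type") == "purchase")
    .groupBy("country")
    .count()
    .show())
```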

With new technologies such as Hive LLAP or Spark SQL, do you still need a data warehouse or can you just put everything in a data lake and report off of that? No! In the presentation, James will discuss why you still need a relational data warehouse and how to use a data lake and an RDBMS data warehouse to get the best of both worlds.

James will go into detail on the characteristics of a data lake and its benefits and why you still need data governance tasks in a data lake. He'll also discuss using Hadoop as the data lake, data virtualization, and the need for OLAP in a big data solution, and he will put it all together by showing common big data architectures.

You've got data. It's time to manage it. Find information here on everything from data governance and data quality, to master and metadata management, data architecture, and the thing that was just invented ten seconds ago.