What Machine Learning can do for you (and what it cannot)

Machine Learning has already started to show its potential and will continue to grow in importance. At the same time, it is often oversold, and its pitfalls and costs are not sufficiently emphasised.

This talk will attempt to find a balance between these two impulses. In particular, after a brief overview of machine learning modes, Luis will look at topics such as:

- How good UX design can amplify or mitigate the failures of the ML component of the system
- How expectation management is important (why do we accept the need to train people, but not computers?)
- Why domain knowledge is almost always necessary, but it's rarely sufficient
- Why there will still be a need for data scientists in 20 years (rather than they, themselves, also being "automated away" by machine learning)

With new technologies such as Hive LLAP or Spark SQL, do you still need a data warehouse or can you just put everything in a data lake and report off of that? No! In the presentation, James will discuss why you still need a relational data warehouse and how to use a data lake and an RDBMS data warehouse to get the best of both worlds.

James will go into detail on the characteristics of a data lake, its benefits, and why data governance tasks are still needed in a data lake. He'll also discuss using Hadoop as the data lake, data virtualization, and the need for OLAP in a big data solution, and he will put it all together by showing common big data architectures.

Watson is a computer system capable of answering questions posed in natural language. Watson was named after IBM's first CEO, Thomas J. Watson. The computer system was specifically developed to answer questions on the quiz show Jeopardy! (where it beat its human competitors) and was then used in commercial applications, the first of which was helping with lung cancer treatment.

NetApp is now using IBM Watson in Elio, a virtual support assistant that responds to queries in natural language. Elio is built using Watson’s cognitive computing capabilities. These enable Elio to analyze unstructured data by using natural language processing to understand grammar and context, understand complex questions, and evaluate all possible meanings to determine what is being asked. Elio then reasons and identifies the best answers to questions with help from experts who monitor the quality of answers and continue to train Elio on more subjects.

Elio and Watson represent an innovative use of large quantities of unstructured data to help solve problems, on average, four times faster than traditional methods. Join us at this webcast to learn more.

AI is changing the way organizations do business and how they interact with customers, and it continues to drive that change. Deep Learning and Natural Language Processing will become standard in AI solutions. Deep Learning is loosely inspired by the brain and uses deep neural networks. AlphaGo was the first AI system to defeat a professional human Go player, the first program to defeat a Go world champion, and is arguably the strongest Go player in history. Baidu improved speech recognition from 89% to 99% using Deep Learning. Knowing Deep Learning tools has become essential for every AI and machine learning scientist.

In this session, we will discuss what Deep Learning is and why it is gaining popularity, and we will explain AI solutions using Deep Learning with a practical example. Deep Learning has an edge over other machine learning techniques because its performance keeps improving as the volume of data increases. Further, Deep Learning enables Hierarchical Feature Learning, i.e. learning feature hierarchies.
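To make "hierarchical feature learning" concrete, here is a minimal, hand-wired sketch in plain Python: a two-layer network in which layer 1 computes simple features of the raw input and layer 2 combines those features into a higher-level one. The weights and the "feature detector" labels are invented purely for illustration; in real Deep Learning both layers' weights would be learned from data.

```python
def relu(v):
    """Rectified linear activation: pass positives through, clamp negatives to 0."""
    return max(0.0, v)

def dense(inputs, units):
    """One fully connected layer; `units` is a list of (weights, bias) pairs,
    one per output neuron."""
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b) for ws, b in units]

# A toy 4-pixel "image"
pixels = [0.0, 1.0, 1.0, 0.0]

# Layer 1: low-level feature detectors (hand-set weights for the sketch)
layer1 = [
    ([-1.0, 1.0, 1.0, -1.0], 0.0),    # responds to a bright centre
    ([0.25, 0.25, 0.25, 0.25], 0.0),  # average brightness
]

# Layer 2: a higher-level feature built from the layer-1 outputs
layer2 = [
    ([1.0, 1.0], -1.0),  # fires when both low-level features are active
]

features1 = dense(pixels, layer1)     # low-level features: [2.0, 0.5]
features2 = dense(features1, layer2)  # higher-level feature: [1.5]
```

The key point of the hierarchy is that layer 2 never sees raw pixels: it only sees the features layer 1 extracted, which is what lets deep networks build increasingly abstract representations layer by layer.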

In analytical reporting, the data and its presentation are often perfect, but the data story still falls flat. By borrowing storytelling techniques from Hollywood, you can effectively drive home the point of your data and take your data visualizations to the next level.

Join Ted Frank, Principal at Backstories Studio, for this webinar as he shows you the quick wins in storytelling, so that right away you can have your stakeholders understanding more and eager for more, on the edge of their seats. It starts with finding your key story and staying out of the weeds, then how to visualize it, and finally how to deliver it so you get heard and make a bigger difference.

You can reach Ted at the following:
- ted@backstories.tv
- www.backstories.tv

Discover his book Get to the Heart below:
- ted@gettotheheartbook.com
- www.gettotheheartbook.com

John Grimwade, Assistant Professor, School of Visual Communication, Ohio University

How do we cross the gap from complex information to a general audience? Democratizing data visualization is perhaps our greatest challenge. There’s an incredible amount of data viz around, but unfortunately most of it does not communicate clearly. At least not to the great majority of the public. It often seems cold and elitist, almost as if it’s designed for people with some training in understanding data. How do we improve this situation? First, we recognize the audience we’re trying to reach, and then we can use well-proven journalistic principles to build a visual presentation that connects with them.

John’s blog, “Infographics for the People”, promotes this same message for all forms of visual communication.

In this presentation, he will explore new approaches that can unleash the enormous potential of data-driven storytelling to inform and empower the general public.

Bio:

John teaches informational graphics to undergraduates and graduates in the School of Visual Communication at Ohio University. He has won 80 major infographic awards, and has run workshops and lectured in many countries. He is currently infographics director for Eight by Eight, a soccer magazine that was the 2015 Magazine of the Year for design. In March 2017, an exhibition of his complete career (sponsored by Adobe), covering both analog and digital visual explanations, was held at the INCH conference in Munich. The exhibition will be staged again at Ohio University in October this year.

In this talk we will see how, whether we are building our first product or revamping an existing one, Embedded Analytics can help us solve real customer problems, which builds product value and creates a competitive differentiator to propel our business forward.

Additionally, we'll take a deep look at how Embedded Analytics differs from Traditional Business Intelligence, and at the factors and trends driving Embedded Analytics.

Selling your house in financial-crisis-stricken Greece is to this day a great ordeal. When faced with such a challenge, I was baffled by the scarcity of conclusive data on land value in my birthplace, the city of Thessaloniki. Embarking on a personal mission, I collected and processed more than 10K online housing ads together with open data, and managed to render an insightful interactive visualization of actual real estate values at the borough and city-block level, which was published through the Greek media. Join me on this thought-process journey to find out how.
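The core aggregation step behind a map like this can be sketched in a few lines: group scraped ads by borough and compute a robust statistic such as the median asking price per square metre. The borough names, prices, and areas below are invented for illustration; the talk's actual pipeline and data are not shown here.

```python
from statistics import median

# Hypothetical scraped ads: (borough, asking_price_eur, area_m2)
ads = [
    ("Kalamaria", 120000, 80),
    ("Kalamaria", 95000, 62),
    ("Toumba", 60000, 70),
    ("Toumba", 55000, 58),
    ("Toumba", 82000, 90),
]

def price_per_m2_by_borough(ads):
    """Median asking price per square metre for each borough.

    The median is preferred over the mean because a few luxury or
    mispriced listings would otherwise skew a borough's value.
    """
    by_borough = {}
    for borough, price, area in ads:
        by_borough.setdefault(borough, []).append(price / area)
    return {b: round(median(vals), 2) for b, vals in by_borough.items()}

summary = price_per_m2_by_borough(ads)  # e.g. {"Kalamaria": ..., "Toumba": ...}
```

A dictionary like `summary` is then straightforward to join with open geodata (borough boundaries) to drive a choropleth visualization.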

It can be hard to keep up with the rapidly changing BI landscape. But it doesn't have to be. Reserve your spot at Qlik's annual BI Trends Webinar.

In this global webinar live replay, we’ll reveal the top BI Trends for the coming year and how they can help you transform your data. Join Qlik’s Global Market Intelligence lead and former Gartner analyst Dan Sommer to learn why 2018 is the year for the “desilofication of data.”

Recent events like the Equifax data leak and new regulations like the EU's General Data Protection Regulation have increased the urgency for further change in the BI landscape and to move data out of silos.

- What is the right strategy and framework?
- How can you easily move from "all data," to "combinations of data," to "data insights"?
- Can data literacy and augmented intelligence create a data-driven culture?

The volume of data available to decision makers continues to be massive, and is growing faster than our ability to consume it. Learn how to move your data out of silos and turn your data into insights.

RIDE supports development in notebooks, an editor, RMarkdown, Shiny apps, Bokeh, and other frameworks. Backed by R-Brain’s optimized kernels, R and Python 3 have full language support, IntelliSense, a debugger, and data views. Autocomplete and content assist are available for the SQL and Python 2 kernels. Spark (standalone) and TensorFlow images are also provided.

Using Docker to manage workspaces, the platform provides a secure and stable development environment, with powerful admin controls over resources and levels of access, including memory usage, CPU usage, and idle time.

The latest stable version of the IDE is always available to all users, with no need for upgrades or additional DevOps work. R-Brain also delivers customized development environments for organizations, which can set up their own Docker registry to use their customized images.

The RIDE Platform is a turnkey solution that increases efficiency in your data science projects by enabling data science teams to work collaboratively without needing to switch between tools. Explore and visualize data and share analyses, all in one IDE with root access and connections to git repositories and databases.

Hear from our expert panel how they built a strong analytics culture in their organisations to enable data-driven decision-making.

We have invited Paul Banoub (UBS, United Kingdom), Emma Whyte (The Information Lab, United Kingdom), Simon Beaumont (NHS), and Josh Tapley (Comcast, United States) to discuss the following topics with us:
- How to find talented people and how to keep them engaged, challenged and motivated
- How to establish the right environment with processes and systems that foster innovation, learning, collaboration and analytical excellence
- How to set up best practices and governance while staying responsive to the organisation's need for information and insights RIGHT NOW
- How to make self-service analytics a success

During the panel discussion you will have the chance to ask questions and get answers from our experts.

Presenters: Andy Kriebel, Head Coach at The Data School & Eva Murray, Head of BI and Tableau Evangelist at Exasol
Panel: Paul Banoub (Director, Analytics as a Service at UBS Investment Bank), Emma Whyte (Head of Centre of Excellence and Customer Advocacy at The Information Lab), Josh Tapley (Director, Data Visualization at Comcast)

IT is a key player in the digital and cognitive transformation of business processes, delivering solutions for improved business value with analytics. This session will explain, step by step, the journey to secure production while adopting new analytics technologies that leverage mainframe core business assets.

Unlocking data’s true value is a challenge, but there is a range of tools and techniques that can help. This live discussion will focus on the data analytics landscape, compliance considerations, and opportunities for improving data utility in 2018 and beyond.

Mixed reality is the result of blending the physical world with the digital world. Though the technology is relatively new and its adoption is still in its early stages, mixed reality devices and applications are projected to define the next technological era after smartphones.

The webinar will give a brief overview of potential mixed reality use cases that provide not only an immersive experience but also revenue streams for their creators.

Data Scientists are rare and highly valued individuals, and for good reason: making sense of data and using machine learning libraries requires an unusual blend of advanced skills. Why is it, then, that data scientists spend the majority of their time getting data ready for models, and only a fraction actually doing the high-value work?

In this talk we introduce the concept of a Data Fabric, a new way to provide a self-service model for data, in which data scientists can easily discover, curate, share, and accelerate data analysis using Python, R, and visualization tools, no matter where the data is managed, how it is structured, or how large it is.

We will talk through the role of Apache Arrow, the in-memory columnar data standard that is accelerating analytics for GPU-based processing, as well as the role of Pandas and Arrow in providing unprecedented speed when accessing datasets from Python.
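The idea that makes a columnar standard like Arrow fast for analytics can be shown in a tiny plain-Python sketch: store each field contiguously rather than record by record, so an analytic query touches only the column it needs. This is an illustration of the layout concept only, not the pyarrow API.

```python
# Row-oriented layout: each record stored together (like database rows)
rows = [
    {"id": 1, "price": 10.0},
    {"id": 2, "price": 12.5},
    {"id": 3, "price": 9.0},
]

# Column-oriented layout (the Arrow idea): each field stored contiguously
columns = {
    "id": [1, 2, 3],
    "price": [10.0, 12.5, 9.0],
}

# An analytic query ("average price") scans one contiguous column,
# instead of walking every record and picking one field out of each
avg_price = sum(columns["price"]) / len(columns["price"])  # 10.5
```

In practice the conversion between the two worlds is handled by the library itself; for example, pyarrow exposes `pyarrow.Table.from_pandas(df)` and `Table.to_pandas()` for moving between Pandas DataFrames and Arrow's columnar memory.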

Vivit launches its first ever SIG Talk event with three speakers who will give you insights into StormRunner Functional, dynamic data handling in performance tests, and AI and machine learning as they apply to testing.

We will discuss:

•How StormRunner Functional is the latest functional test offering from Micro Focus. In this session, Chris Trimper, an early adopter and beta tester, will share his experiences with this new product

•How Virtual Table Server (VTS), in the days of Mercury Interactive, was the often-overlooked repository for dynamically updated real-time test data. Even after a revamp and relaunch in 2014, many people still don't use it, but they should. Richard's VTS demo will help you get value from this great add-on for LoadRunner and Performance Center

•How Big Data, Artificial Intelligence and Machine learning are rapidly impacting businesses and customers, enabling another massive shift through technology enablement. In this session, Todd DeCapua will share how these capabilities are being leveraged in Performance Engineering now, and into the future

Join us for the next Quality & Testing SIG Talk on Tuesday, January 9, 2018: http://www.vivit-worldwide.org/events/EventDetails.aspx?id=1041157&group=.


Peter Bruce, President and Founder, The Institute for Statistics Education at Statistics.com

Artificial Intelligence (AI) is a hot topic, and there is widespread alarm that AI will replace humans in the analytical process. Adam Selipsky, the CEO of Tableau, calls this a myth, saying recently that AI's role will remain that of an assistant to the analytics professional.

In this talk we go beyond that, and look at some interesting aspects of the human role as an integral component of machine learning and statistical modeling.

We discuss how human expertise "supervises" machine learning, how reliance on multiple sources can deliver surprising expertise, and when such a system can go wrong.

This session explores what artificial intelligence can add to data analysis in the social housing sector. Is it better than traditional statistical techniques?

HouseMark is the leading provider of data analysis and insight solutions to social housing providers. Illumr took part in HouseMark’s accelerator programme, which introduces innovative technology start-ups to the social housing sector. HouseMark has worked with illumr to test the appetite for AI and explore what insights can be gained from data typically held by social housing providers.

Data is the foundation of any organization, so it is paramount that it is managed and maintained as a valuable resource.

Subscribe to this channel to learn best practices and emerging trends in a variety of topics including data governance, analysis, quality management, warehousing, business intelligence, ERP, CRM, big data and more.