Vision and Visualization: Practical Wisdom from Research in Human Vision

In data visualization, we map data values and relationships onto visual dimensions to create a graphical representation for exploration and analysis. How can we best use the power of the human visual system to make these values and relationships clear? Using examples from information design, cartography and data graphics, we will demonstrate how insights from research in color perception, perceptual organization and visual attention have helped define best practices for visual analysis.

You will learn how to use a few perceptual and cognitive building blocks that can inform a wide variety of visualization choices, and see how these influenced the design of the Tableau product.

Any organization that takes a moment to study the data on its primary storage system will quickly realize that the majority (as much as 90 percent) of the data stored there has not been accessed for months, if not years. Moving this data to a secondary tier of storage could free up a massive amount of capacity, deferring a storage upgrade for years. Performing this analysis regularly is called data management, and proper management of data can not only reduce costs but also improve data protection, retention and preservation.
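As a toy illustration of this kind of analysis, the sketch below walks a directory tree and tallies how much capacity has not been accessed recently. The paths and the 180-day threshold are illustrative assumptions, and the approach relies on the filesystem actually tracking access times (many mounts use `noatime` or `relatime`):

```python
import os
import time

def cold_data_share(root, days=180):
    """Walk a directory tree and report how much capacity is 'cold',
    i.e. not accessed (by atime) in the last `days` days."""
    cutoff = time.time() - days * 86400
    cold = total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            total += st.st_size
            if st.st_atime < cutoff:
                cold += st.st_size
    return cold, total

# Hypothetical usage on a data volume:
# cold, total = cold_data_share("/srv/data")
# print(f"{cold / total:.0%} of {total} bytes are candidates for a secondary tier")
```

Files whose size dominates the cold total are the natural first candidates to move to the secondary tier.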

Is it worth it for companies to spend millions of dollars a year on software that can't keep up with constantly evolving open source software? What are the advantages and disadvantages to keeping enterprise licenses and how secure is open source software really?

Join Data Society CEO, Merav Yuravlivker, as she goes over the software trends in the data science space and where big companies are headed in 2017 and beyond.

About the speaker: Merav Yuravlivker is the Co-founder and Chief Executive Officer of Data Society. She has over 10 years of experience in instructional design, training, and teaching. Merav has helped bring new insights to businesses and move their organizations forward through implementing data analytics strategies and training. Merav manages all product development and instructional design for Data Society and heads all consulting projects related to the education sector. She is passionate about increasing data science knowledge from the executive level to the analyst level.

So you’ve decided you want to jump on the data analytics bandwagon and propel your company into the 21st century with better analytics, reporting and data visualization. But to get a BI project rolling you usually need the entire organization, or at the very least the entire department, to get on board. Since embarking on a BI initiative requires an investment of time and resources, convincing the relevant people in the company to take the leap is imperative. You’ll need to construct a solid business case, defend your budget request and prove the value BI can bring to your organization.

In this webinar you’ll discover:

- Why organizations need to invest in BI to begin with
- How organizations are deriving value from BI
- How to build an internal business case for investing in BI
- The intricacies of building a budget
- How to drive your company to a purchasing decision
- How to start realizing value from BI now

Now that you have become acquainted with basic container technologies and the associated storage challenges of supporting applications running within containers in production, let's take a deeper dive into what differentiates this technology from the virtual machines you are used to. Containers can complement virtual machines or replace them outright, as they promise the ability to scale exponentially higher. They can easily be ported from one physical server to another, or from one platform (such as on-premises) to another (such as public cloud providers like Amazon AWS). In this webcast, we'll explore container best practices that address the various challenges around networking, security and logging. We'll also look at which types of applications lend themselves more easily to a microservice architecture, and which may require additional investment to refactor or re-architect to take advantage of microservices.

Today, data is everywhere. As more data streams into cloud-based systems, the combination of data and computing resources gives us an unprecedented opportunity to perform very sophisticated data analysis and to explore advanced machine learning methods such as deep learning.

Clouds pack very large amounts of computing and storage resources, which can be dynamically allocated to create powerful analytical environments. By accessing these clusters of machines, data analysts and data scientists can quickly and cost-effectively evaluate more hypotheses and scenarios in parallel.
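As a minimal, purely illustrative sketch of fanning hypotheses out in parallel (the scenarios and scoring function are invented; in practice a cloud scheduler and real model runs replace the local thread pool and toy `evaluate`):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(scenario):
    # Stand-in for an expensive model run (a training job, backtest, or simulation).
    name, params = scenario
    return name, sum(params)

scenarios = [
    ("baseline", [1, 2, 3]),
    ("aggressive", [4, 5, 6]),
    ("conservative", [0, 1, 1]),
]

# Fan the hypotheses out across workers, much as a cloud scheduler
# would fan them out across dynamically allocated machines.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(evaluate, scenarios))
```

The same fan-out pattern scales from a laptop thread pool to a provisioned analytics cluster; only the executor changes.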

The number of analytical tools supported on the various clouds is increasing by the day. The list spans from traditional vendor-provided RDBMS databases to open source analytics projects such as Hadoop Hive, Spark and H2O. Alongside tools and solutions provisioned on the cloud, managed services for data science, big data and analytics are becoming a popular offering of many clouds.

Analytics in the cloud provides whole new ways for data analysts, data scientists and business developers to interact with each other, share data and experiments, and develop insights that improve business processes and results. In this talk, I will describe a number of data analytics solutions for the cloud and how they can be added to your current cloud and on-premises landscape.

The classic unimodal data warehouse architecture has run its course: it primarily supports structured data and cannot handle newer data types such as social, streaming, and IoT data. A new BI architecture, such as the “logical data warehouse”, is required to augment traditional, rigid unimodal data warehouse systems with a bimodal architecture that supports requirements that are experimental, flexible, explorative, and self-service oriented.

Learn from logical data warehousing expert Rick van der Lans how you can implement an agile data strategy using a bimodal logical data warehouse architecture.

Organizations, already awash in customer data, know geospatial capabilities can put a new “lens” on existing reports. Data from smartphones, GPS devices and social media has organizations eager to factor in customer location, origin or destination, along with time of day.

Join IBM Product Marketing Manager David Clement and IBM Senior Product Manager Rick Blackwell to explore the new, world-class mapping and geospatial capabilities of IBM Cognos Analytics and Watson Analytics. Discover how you can add a geographic dimension to visualizing critical business information in reports and dashboards in Cognos Analytics.

Traditional report factories are rapidly becoming obsolete. Enterprise organizations are shifting to self-service analytics and looking for a sustainable, long-term approach to governance that satisfies the needs of both the business and IT.

The business needs real-time access to data to drive critical decisions. IT needs to audit and manage data to ensure it’s accurate, secure, and governed at scale.

With only eight percent of people in traditional organizations able to both ask and answer their own questions, it’s time to take a closer look at your analytics strategy.

Join this webinar to take a closer look at enterprise analytics and learn how:
- Visual data analysis brings speed, value, accuracy and collaboration, and leads to a culture of analytics
- Modern enterprises are eliminating boundaries between IT and the business
- Shifting to enterprise self-service analytics tools empowers both the business and IT

Predictive analytics and the study of big data have helped many institutions detect fraudulent practices before they become a hazard to the business. This is especially evident in the financial services sector, where deploying an efficient prevention and detection strategy is of the utmost importance.

Join this panel where experts will discuss:
-Which analytics to look at to stop fraudulent payments in real-time
-Using trends and behavioural analytics to detect anomalies
-How to implement a holistic strategy that's right for your organisation
-The challenges in maintaining compliance standards
-Use cases and applications of analytics to prevent financial crime

-The value of Big Data and which skills are required to deliver that value
-How to get started with Big Data projects
-What to do if progress is limited
-Business opportunities around customer insight, supply chain analytics, and more

The duo will discuss a successful case study on data-driven decision making.

They will tackle:
-How to implement data solutions quickly and efficiently in the cloud
-What are the challenges of data-driven decision making?
-How to discover data pain-points across an organisation and solve these accurately
-The importance of real-time analytics in generating actionable insights

-Moving beyond dashboards and applying the “5 Whys” technique to data
-Best practice tips for exploring and manipulating data
-The need to think about “data exploration” not as a task in itself, but as part of a person’s goal to make an impact on their business

Technical debt is a common challenge that can make rigorously testing evolving applications nearly impossible. Even with minimal documentation and no subject-matter expertise, the data that goes in and out of a system can be harnessed: rule learning can reverse-engineer a functional model of a complex system, driving efficient, effective testing.
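As a toy sketch of the idea (the “captured traffic” below is invented, and real rule learning uses far richer induction than a per-input majority vote), a rule table inferred from observed input/output pairs can then serve as a test oracle for the legacy system:

```python
from collections import Counter, defaultdict

def learn_rules(observations):
    """Infer an input -> output rule table from captured system traffic:
    each input maps to the output most frequently observed with it."""
    seen = defaultdict(Counter)
    for inp, out in observations:
        seen[inp][out] += 1
    return {inp: outs.most_common(1)[0][0] for inp, outs in seen.items()}

# Invented traffic from a legacy service: (customer tier, region) -> discount
observations = [
    (("gold", "US"), 0.20),
    (("gold", "US"), 0.20),
    (("silver", "US"), 0.10),
    (("gold", "EU"), 0.25),
]
model = learn_rules(observations)
# The learned rules become expected values for regression tests,
# without any documentation of the original business logic.
```

Once the rule table is in hand, each rule can be replayed against the system under test and mismatches flagged for investigation.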

- As the founder of Trifacta, tell us a bit about your company and just what is data wrangling?
- How does it differ from ETL?
- You have just announced a new server edition of Trifacta, can you tell us more about this?
- Can you give us some examples of how your customers are leveraging Big Data?
- What makes a big data project successful?
- What advice would you give to companies starting out with a big data project?
- What are the biggest hurdles to overcome?
- What use cases are the most prevalent at the moment and will that change over time?

1) What are some of the challenges data professionals face when developing their own cloud applications?
2) How important is it to provide end users with real-time insights?
3) Why is your database choice critical for transforming customer experience?
4) How have customer expectations changed in the past 5 years?

Charlie will discuss:
-Do search engines and Big Data systems share any history?
-How can search engines be used to make sense of Big Data?
-What are the options available for those wanting to add full-text search to their Big Data stack?
-Why is open source search a better choice than a closed, commercial alternative?

We are all aware of the challenges enterprises face with growing data and siloed data stores. Businesses cannot make reliable decisions with untrusted data, and on top of that, they lack access to all the data within and outside their enterprise needed to stay ahead of the competition and make key decisions for their business.

This session will take a deep dive into the challenges healthcare businesses face today, as well as how to build a modern data architecture using emerging technologies such as Hadoop, Spark, NoSQL and MPP data stores, and scalable, cost-effective cloud solutions such as AWS, Azure and BigStep.

Past infrastructures provided compute, storage and networking for static enterprise deployments that changed only every few years. This talk will analyze the consequences of a world where production SAP and Spark clusters, including their data, can be provisioned in minutes at the push of a button.

What does this mean for the IT architecture of an enterprise? How do you stay in control in a super-agile world?

Businesses are extracting value from more data, from more sources, and at increasingly real-time rates. Spark and HANA are just the beginning. This webcast details existing and emerging in-memory computing solutions that address this market trend, and the disruptions that happen when combining big data (petabytes) with in-memory, real-time requirements. It provides an overview and the trade-offs of key solutions (Hadoop/Spark, Tachyon, HANA, in-memory NoSQL, etc.) and related infrastructure (DRAM, NAND, 3D XPoint, NV-DIMMs, high-speed networking), and discusses the disruption to infrastructure design and operations when "tiered memory" replaces "tiered storage".