Run Operational Applications on Hadoop using Splice Machine and Hortonworks

Retailers are interested in capturing growing volumes of data to deliver value to their organizations – whether through increased revenue and margin, improved loyalty and consumer intimacy, or back-office optimization of IT processes. As a whole, the Retail and Consumer Packaged Goods industries are realizing that Big Data solutions like Hortonworks Hadoop can provide business value, reduce cost, and improve consumer insight.

Join our speakers Eric Thorsen and Krishnan Parasuraman to learn how Hortonworks and Splice Machine can be leveraged to support Systems of Record and the growing field of big data and Hadoop.

Hortonworks Data Cloud for Amazon Web Services is a new product offering from Hortonworks that is delivered and sold via the AWS Marketplace. It allows you to start analyzing and processing vast amounts of data quickly. Powered by the Hortonworks Data Platform, Hortonworks Data Cloud is an easy-to-use and cost-effective solution for handling big data use cases with Apache Hadoop, Hive, and Spark.

Join us on Dec. 15, 2016 to learn more about the product and to see a live demo by Sean Roberts. You’ll see how to quickly deploy Apache Spark and Apache Hive clusters for processing and analyzing data in the cloud.

The integration of Apache Ranger and Apache Atlas is driving the creation of classification-based access security policies in Hadoop. The Atlas project is a data governance and metadata framework for the Hadoop platform that includes contributions from end users and vendors. The Ranger project provides central security policy administration, auditing, authorization, authentication, and data encryption for Hadoop ecosystem projects such as HDFS, Hive, HBase, Storm, Solr, Kafka, NiFi, and YARN. When Atlas and Ranger are used together, system administrators can define flexible metadata tag-based security policies to protect data in real time.
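To make the idea concrete, a tag-based policy is ultimately just a JSON document sent to Ranger's admin REST API. The sketch below builds one such policy; the tag service name, tag, user, and access type are hypothetical placeholders, and the exact JSON shape may vary across Ranger versions.

```python
# Illustrative sketch of a Ranger tag-based policy. The shape approximates
# Ranger's public v2 policy API; service/tag/user names are hypothetical.
import json


def make_tag_policy(tag: str, user: str, access_type: str) -> dict:
    """Allow `user` the given access on any asset Atlas has classified with `tag`."""
    return {
        "service": "cl1_tag",  # the cluster's tag service; deployment-specific
        "name": f"allow-{user}-on-{tag}",
        "isEnabled": True,
        "resources": {"tag": {"values": [tag], "isExclude": False}},
        "policyItems": [
            {
                "users": [user],
                "accesses": [{"type": access_type, "isAllowed": True}],
            }
        ],
    }


if __name__ == "__main__":
    # e.g. let analyst1 run Hive SELECTs on anything Atlas has tagged PII
    policy = make_tag_policy("PII", "analyst1", "hive:select")
    print(json.dumps(policy, indent=2))
    # In a real deployment this JSON would be POSTed to the Ranger admin,
    # e.g. http://<ranger-admin>:6080/service/public/v2/api/policy
```

Because the policy matches a tag rather than a specific table or path, the same rule automatically covers every new asset Atlas classifies with that tag.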

In this month’s Partnerworks Office Hours we will discuss the Security & Governance capabilities of HDP 2.5 with the integration of Apache Ranger and Apache Atlas. As usual, we will leave plenty of time for your questions and further discussion.

Student performance data is increasingly being captured as part of software-based and online classroom exercises and testing. This data can be augmented with behavioral data captured from sources such as social media, student-professor meeting notes, blogs, student surveys, and so forth to discover new insights to improve student learning. The results transcend traditional IT departments to focus on issues like retention, research, and the delivery of content and courses through new modalities.

Hortonworks is partnering with Microsoft to show you how the Hortonworks Data Platform (HDP) running on the Microsoft stack enables you to develop a “single view of a student”.

Hortonworks Data Cloud for Amazon Web Services is a new product offering from Hortonworks that is delivered and sold via the AWS Marketplace. It allows you to start analyzing and processing vast amounts of data quickly. Powered by the Hortonworks Data Platform, Hortonworks Data Cloud is an easy-to-use and cost-effective solution for handling big data use cases with Apache Hadoop, Hive, and Spark.

Join us on Dec. 7, 2016 to learn more about the product and to see a live demo by Jeff Sposetti, Senior Director of Product Management. You’ll see how to quickly deploy Apache Spark and Apache Hive clusters for processing and analyzing data in the cloud.

Part five in a five-part series, this webcast will demonstrate the integration of Apache Zeppelin and Pivotal HDB. Apache Zeppelin is a web-based notebook that enables interactive data analytics: you can create beautiful data-driven, interactive, and collaborative documents with SQL, Scala, and more. This webinar will demonstrate the configuration of the psql interpreter and the basic operations of Apache Zeppelin when used in conjunction with Pivotal HDB.
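As a taste of what the demo covers, a Zeppelin paragraph bound to the psql interpreter is just SQL prefixed with the interpreter directive. The fragment below is illustrative only: the interpreter binding (`%psql.sql` here) and the table and column names are assumptions that may differ in your installation.

```sql
%psql.sql
-- Hypothetical example: create and query a small table in HDB
CREATE TABLE sales (region text, amount int);
INSERT INTO sales VALUES ('EMEA', 100), ('APAC', 250), ('EMEA', 50);
SELECT region, sum(amount) AS total FROM sales GROUP BY region;
```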

Delivering Data-Driven Applications at the Speed of Business: Global Banking AML use case.

Chief Data Officers in financial services have unique challenges: they need to establish an effective data ecosystem under strict governance and regulatory requirements. They need to build the data-driven applications that enable risk and compliance initiatives to run efficiently. In this webinar, we will discuss the case of a global banking leader and the anti-money laundering solution they built on the data lake. With a single platform to aggregate structured and unstructured information essential to determine and document AML case disposition, they reduced mean time for case resolution by 75%. They have a roadmap for building over 150 data-driven applications on the same search-based data discovery platform so they can mitigate risks and seize opportunities, at the speed of business.

Apache MiNiFi is designed to make it practical to enable data collection from the second it is born, ideal for IoT scenarios where there are a large number of connected devices or a need for a smaller, more streamlined footprint than Apache NiFi. Join us as we share a use case and demo of Apache MiNiFi, and how it can enable edge data collection from Raspberry Pis attached to weatherproof antennas distributed all over the world.

Customers are preparing themselves to analyze and manage an increasing quantity of structured and unstructured data. Business leaders introduce new analytical workloads faster than IT departments can handle. Legacy IT infrastructure needs to evolve to deliver operational improvements and cost containment while increasing flexibility to meet future requirements. By providing HDP on IBM Power Systems, Hortonworks and IBM are giving customers more choice in selecting the architectural platform that is right for them. In this webinar, we’ll discuss some of the challenges of deploying big data platforms, and how solutions built with HDP on IBM Power Systems can offer tangible benefits and the flexibility to accommodate changing needs.

Rapid data growth from a wide range of new data sources is significantly outpacing organizations’ abilities to manage data with existing systems. Today’s data architectures and IT budgets are straining under the pressure. In response, the center of gravity in the data architecture is shifting from structured transactional systems to cloud-based modern data architectures and applications, with Hadoop at its core.

Join this live and on-demand video panel as our panelists discuss how the landscape is changing and offer insights into how organizations are successfully navigating this shift to capture new business opportunities while driving out cost.

Organizations today are looking to exploit modern data architectures that combine the power and scale of Big Data Hadoop platforms with operational data from their transactional systems. In order to react to situations in an agile manner in real time, low-latency access to data is essential.

Hortonworks and Oracle can provide comprehensive solutions that allow organisations to respond rapidly to data events.

During this webinar we will cover:

- How Oracle GoldenGate empowers organisations to capture, route, and deliver transactional data from Oracle and non-Oracle databases for real-time ingestion into HDP®.
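For a flavor of the setup, delivery into Hadoop is driven by a handler properties file on the GoldenGate for Big Data side. The fragment below is a minimal sketch assuming the HDFS handler; the property names follow the GoldenGate for Big Data documentation, while the target path and format choices are hypothetical placeholders.

```properties
# Minimal sketch of a GoldenGate for Big Data HDFS handler configuration
# (target path and format choices are hypothetical placeholders)
gg.handlerlist=hdfs
gg.handler.hdfs.type=hdfs
gg.handler.hdfs.rootFilePath=/data/ogg
gg.handler.hdfs.format=delimitedtext
gg.handler.hdfs.mode=tx
```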

Today’s European financial markets hardly resemble those of 15 years ago. The high speed of electronic trading, the explosion in trading volumes, a diverse range of instrument classes, and a proliferation of trading venues pose massive challenges. With all this complexity, market abuse patterns have also become more egregious, and banks are now shelling out millions of euros in fines for market abuse violations. In response to this complex world, European regulators have been hard at work.

In this webinar, we will discuss how compliance teams are fighting back with Big Data and trying to stay out of regulatory hot water. Rapid response to suspect trades means compliance teams need to access and visualize trade patterns, real time and historic data, and be able to efficiently perform trade reconstruction at any point in time.

Join Hortonworks and Arcadia Data for this live webinar on 22 November at 14:00 GMT, where we’ll cover the use case at a Top 25 global bank that now performs deep forensic analysis of trade activity.

The fourth industrial revolution is here, and competing to succeed in the 4.0 ‘digital’ world entails making the right decisions based on data-driven indicators in order to implement your strategy successfully. As we work with Fortune 100 organizations across the entire stack, we often see companies, particularly those operating across complex lines of business and/or looking to break into new markets or asset classes, struggle to answer two questions:

- ‘How to innovate’ using the rich trove of data that they already have
- ‘Which systems and processes to renovate’ to get the best value for their investment

The Hortonworks Big Data Maturity Scorecard fills the knowledge gap. It gives you a better understanding of the opportunities unique to your business and helps you see how the maturity of your organization enables or inhibits your ability to strategically pursue big-data-enabled business transformation programs aligned to your business goals. Join this webinar to learn more.

Part four in a five-part series, this webcast will demonstrate the installation of Apache MADlib (incubating) into Hortonworks HDB. MADlib is an open-source library for scalable in-database analytics, providing data-parallel implementations of mathematical, statistical, and machine learning methods for structured and unstructured data. This webinar will demonstrate the installation procedure, as well as some basic machine learning algorithms to verify the install.
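As a preview of the verification step, MADlib functions are invoked as plain SQL once the library is installed in the database. The queries below follow the pattern of MADlib's linear-regression examples; the `houses` table and its columns are hypothetical sample data.

```sql
-- Confirm the install succeeded
SELECT madlib.version();

-- Train a linear regression on a (hypothetical) houses table
SELECT madlib.linregr_train(
    'houses',                      -- source table
    'houses_linregr',              -- output model table
    'price',                       -- dependent variable
    'ARRAY[1, tax, bath, size]'    -- independent variables (1 = intercept)
);

-- Inspect the fitted coefficients
SELECT coef, r2 FROM houses_linregr;
```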

Streaming analytics is the new normal. Customers are exploring use cases that have quickly transitioned from batch to near real time. Hortonworks DataFlow (Apache NiFi) and Isilon provide a robust, scalable foundation for real-time streaming architectures. Explore our use cases and demo showing how Hortonworks DataFlow and Isilon can empower your business for real-time success.

Johnson Controls delivers best-in-class building technologies and energy storage. In their quest to continually improve operations, they implemented a modern data architecture based on Hadoop. They started with a small successful proof of concept and recognized the need to make more of their data accessible to more teams. Johnson Controls was able to successfully integrate Big Data technology into more of their operations that ultimately supports their goals to create better products, reduce waste, and increase profitability.

Join this webinar where our guest speaker from Johnson Controls shares their journey from a small Hadoop POC to a global production-ready implementation that includes security and governance.

You will hear:
• What steps JCI took throughout the process
• What framework and technologies they used
• How they engaged executives for full corporate buy-in
• Issues they encountered and solved
• How JCI was able to provide a secure, reliable platform to consolidate global data
• Their next steps to complete the global integration

Apache NiFi, Storm, and Kafka augment each other in modern enterprise architectures. NiFi provides a code-free way to get many different formats and protocols in and out of Kafka, and complements Kafka with full audit trails and interactive command and control. Storm complements NiFi with the capability to handle complex event processing.

Join us to learn how Apache NiFi, Storm, and Kafka can augment each other to create a new data plane that connects multiple systems within your enterprise with ease, speed, and increased productivity.

Adoption of a modern data platform is a journey, and every step requires different levels of technology, people, and process capabilities. A reliable services partner with deep expertise is key to your success at each step of the way. The Hortonworks services model is designed to provide the expertise needed at each step of your adoption journey, and we have defined our offerings to address the unique needs at each level.

Hortonworks IAM Services (Implementation, Advisory, and Managed Services) are delivered by our global professional services consultants to help you succeed with the adoption of connected data platforms. Hortonworks IAM services are based on proven methodologies developed by our experts in collaboration with product management and committers from our R&D teams.

Apache MiNiFi is designed to make it practical to enable data collection from the second it is born, ideal for IoT scenarios where there are a large number of connected devices or a need for a smaller, more streamlined footprint than Apache NiFi. Join us as we walk through how Apache MiNiFi works, and how it can enable edge data collection from the likes of connected cars, log services, Raspberry Pis, and more.

Hadoop and the Internet of Things have enabled data-driven companies to leverage new data sources and apply new analytical techniques in creative ways that provide competitive advantage. Beyond clickstream data, companies are finding transformational insights in machine data and telemetry that radically improve operational efficiencies and yield new, actionable customer insights.

During this webinar we will:
- Discuss real world case studies from the field across a variety of verticals
- Describe the strategies, architectures, and results achieved by Fortune 500 organisations
- Outline the best practices on how to improve your operational efficiency

Founded in 2011 by 24 engineers from the original Yahoo! Hadoop team, Hortonworks has amassed more Hadoop experience under one roof than any other organization in the world. Our team works every day to enhance the Hadoop core, creating new code to improve the open product we have stewarded since its inception. Simply put, we are your best choice to support you in your Hadoop journey.