Apache HBase

OVERVIEW

A non-relational (NoSQL) database that runs on top of HDFS

Apache HBase is an open-source NoSQL database that provides real-time read/write access to large datasets.

HBase scales linearly to handle huge datasets with billions of rows and millions of columns, and it easily combines data sources that use a wide variety of structures and schemas. HBase is natively integrated with Hadoop and works seamlessly alongside other data access engines through YARN.

What HBase Does

Apache HBase provides random, real-time access to your data in Hadoop. It was created for hosting very large tables, making it a great choice for storing multi-structured or sparse data. Users can query HBase for data as of a particular point in time, making “flashback” queries possible. The following characteristics make HBase a great choice for storing semi-structured data, such as log data, and then serving that data very quickly to users or to applications integrated with HBase.

Characteristics and benefits:

Fault tolerant
- Replication across the data center
- Atomic and strongly consistent row-level operations
- High availability through automatic failover
- Automatic sharding and load balancing of tables

Fast
- Near real-time lookups
- In-memory caching via block cache and Bloom filters
- Server-side processing via filters and coprocessors

Usable
- Data model accommodates a wide range of use cases
- Metrics export via File and Ganglia plugins
- Easy Java API as well as Thrift and REST gateway APIs
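The timestamped-cell model behind those “flashback” queries can be illustrated with a small, self-contained sketch. This is a conceptual model, not the HBase client API; the class and method names here are invented for illustration. In HBase, every cell value is stored with a timestamp, and a read can ask for the newest version at or before a given time:

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Conceptual sketch of an HBase-style versioned cell (not the real client
// API): each write is kept under its timestamp, and a read "as of" time T
// returns the newest value written at or before T.
class VersionedCell {
    private final NavigableMap<Long, String> versions = new TreeMap<>();

    void put(long timestamp, String value) {
        versions.put(timestamp, value);
    }

    // Newest value at or before the given timestamp, or null if none exists.
    String getAsOf(long timestamp) {
        Map.Entry<Long, String> e = versions.floorEntry(timestamp);
        return e == null ? null : e.getValue();
    }

    public static void main(String[] args) {
        VersionedCell cell = new VersionedCell();
        cell.put(100L, "threat-level: low");
        cell.put(200L, "threat-level: high");
        System.out.println(cell.getAsOf(150L)); // the value as of time 150
        System.out.println(cell.getAsOf(250L)); // the latest value
    }
}
```

In the real API, the same idea surfaces as per-cell timestamps plus a time range on a read, which is what makes point-in-time queries possible without any special schema design.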

Enterprises use Apache HBase’s low-latency storage for scenarios that require real-time analysis and for serving tabular data to end-user applications. One company that provides web security services maintains a system that accepts billions of event traces and activity logs from its customers’ desktops every day. The company’s programmers can tightly integrate their security solutions with HBase to ensure that the protection they provide keeps pace with real-time changes in the threat landscape.

Another company provides stock market ticker plant data that its users query more than thirty thousand times per second, with an SLA of only a few milliseconds. Apache HBase provides that very low-latency access over an enormous, rapidly changing data store.

How HBase Works

HBase scales linearly by requiring all tables to have a primary key. The key space is divided into sequential blocks that are then allotted to regions. RegionServers own one or more regions, spreading the load uniformly across the cluster. If the keys within a region are accessed frequently, HBase can further subdivide the region by splitting it automatically, so manual data sharding is not necessary.
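The key-space-to-region routing described above can be sketched as a sorted map from each region’s start key to the server hosting it: a row key belongs to the region whose start key is the greatest one less than or equal to it. This is a simplified conceptual model, not actual HBase client code, and the names used here are invented for the sketch:

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Conceptual sketch of client-side region routing (a simplification, not the
// real HBase client): the sorted key space is split into regions, and a row
// key routes to the region whose start key is the greatest one <= the key.
class RegionMap {
    // Maps each region's start key to the RegionServer hosting that region.
    private final NavigableMap<String, String> regions = new TreeMap<>();

    void addRegion(String startKey, String regionServer) {
        regions.put(startKey, regionServer);
    }

    // The server responsible for the region covering this row key.
    String serverFor(String rowKey) {
        Map.Entry<String, String> e = regions.floorEntry(rowKey);
        return e == null ? null : e.getValue();
    }

    public static void main(String[] args) {
        RegionMap map = new RegionMap();
        map.addRegion("", "regionserver-1");  // keys before "g"
        map.addRegion("g", "regionserver-2"); // keys from "g" up to "n"
        map.addRegion("n", "regionserver-3"); // keys from "n" onward
        System.out.println(map.serverFor("apple")); // regionserver-1
        System.out.println(map.serverFor("kiwi"));  // regionserver-2
        System.out.println(map.serverFor("pear"));  // regionserver-3
    }
}
```

Because clients cache exactly this kind of mapping, a read or write goes straight to the owning RegionServer; splitting a hot region just adds a new start-key entry to the map.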

ZooKeeper and HMaster servers make information about the cluster topology available to clients. Clients connect to them and download a list of RegionServers, the regions contained within each RegionServer, and the key ranges hosted by those regions. Clients therefore know exactly where any piece of data lives in HBase and can contact the RegionServer directly, without any need for a central coordinator.


