<h1>Balancing Groups of Tablets</h1>
<p><i>Keith Turner, 2015-03-20</i></p>
<p>This post was moved <a href="https://accumulo.apache.org/blog/2015/03/20/balancing-groups-of-tablets.html" title="Updated location">to the Accumulo project site</a>.</p>
<p>Accumulo has a pluggable tablet balancer that decides where tablets should be placed.&nbsp; Accumulo&#39;s default configuration spreads each table&#39;s tablets evenly and randomly across the tablet servers.&nbsp; Each table can configure a custom balancer that does something different.</p>
<p>For some applications to perform optimally, sub-ranges of a table need to be spread evenly across the cluster.&nbsp; Over the years I have run into multiple use cases for this situation.&nbsp; The latest use case was <a href="https://github.com/fluo-io/fluo/issues/361">bad performance</a> on the <a href="http://fluo.io">Fluo</a> <a href="https://github.com/fluo-io/fluo-stress">Stress Test</a>.&nbsp; This test stores a tree in an Accumulo table and creates multiple tablets for each level in the tree.&nbsp; In parallel, the test reads data from one level and writes it up to the next level.&nbsp; Figure 1 below shows an example of tablet servers hosting tablets for different levels of the tree.&nbsp; In that scenario, if many threads are reading data from level 2 and writing up to level 1, only Tserver 1 and Tserver 2 will be utilized, leaving 50% of the tablet servers idle.</p>
<div align="center">
<p><a href="https://blogs.apache.org/accumulo/mediaresource/265a0395-e163-4123-a753-a5d264d84f3f"><img alt="Figure 1" src="https://blogs.apache.org/accumulo/mediaresource/265a0395-e163-4123-a753-a5d264d84f3f"></a></p>
<p><i>Figure 1</i><br></p>
</div>
<p><a href="https://issues.apache.org/jira/browse/ACCUMULO-3439">ACCUMULO-3439</a> remedied this situation with the introduction of the<a href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob;f=server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java;hb=b0815affade66ab04ca27b6fc3abaac400097469"> GroupBalancer</a> and <a href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob;f=server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java;hb=51fbfaf0a52dc89e8294c86c30164fb94c9f644c">RegexGroupBalancer</a> which will be available in Accumulo 1.7.0.&nbsp; These balancers allow a user to arbitrarily group tablets.&nbsp; Each group defined by the user will be evenly spread across the tablet servers.&nbsp; Also, the total number of groups on each tablet server is minimized.&nbsp;&nbsp; As tablets are added or removed from the table, the balancer will migrate tablets to satisfy these goals.&nbsp; Much of the complexity in the GroupBalancer code comes from trying to minimize the number of migrations needed to reach a good state.</p>
<p>A GroupBalancer could be configured for the table in Figure 1 in such a way that it grouped tablets by level.&nbsp; If this were done, the result might look like Figure 2 below.&nbsp; With this tablet-to-tablet-server mapping, many threads reading from level 2 and writing data up to level 1 would utilize all of the tablet servers, yielding better performance. <br></p>
<p align="center"><a href="https://blogs.apache.org/accumulo/mediaresource/dbf9933e-efa3-428a-857e-96d2a28de4d5"><img src="https://blogs.apache.org/accumulo/mediaresource/dbf9933e-efa3-428a-857e-96d2a28de4d5" alt="Figure 2"></a></p>
<p align="center"><i>Figure 2</i><br></p>
<p><a href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob;f=docs/src/main/resources/examples/README.rgbalancer;hb=51fbfaf0a52dc89e8294c86c30164fb94c9f644c">README.rgbalancer</a> provides a good example of configuring and using the RegexGroupBalancer.&nbsp; If a regular expression can not accomplish the needed grouping, then a grouping function can be written in Java.&nbsp; Extend GroupBalancer to write a grouping function in java.&nbsp; RegexGroupBalancer provides a good example of how to do this.</p>
<p>When using a GroupBalancer, keep in mind how Accumulo automatically splits tablets.&nbsp; When Accumulo decides to split a tablet, it chooses the shortest possible row prefix from the tablet data that yields a good split point. Therefore it&#39;s possible that a split point shorter than what the GroupBalancer expects could be chosen.&nbsp; The best way to avoid this situation is to pre-split the table such that it precludes this possibility.<br><br>The Fluo Stress test is a very abstract use case.&nbsp; A more concrete use case for the group balancer would be using it to ensure tablets storing geographic data are spread out evenly.&nbsp; For example, consider <a href="https://ngageoint.github.io/geowave/">GeoWave&#39;s</a> Accumulo <a href="http://ngageoint.github.io/geowave/documentation.html#architecture-accumulo">Persistence Model</a>.&nbsp; Tablets could be balanced such that bins related to different regions are spread out evenly.&nbsp; For example, tablets related to each continent could be assigned a group, ensuring data related to each continent is evenly spread across the cluster.&nbsp; Alternatively, each tier could be spread evenly across the cluster.<br></p>
<h1>Functional reads over Accumulo</h1>
<p><i>elserj, 2014-07-09</i></p>
<p>This post was moved <a href="https://accumulo.apache.org/blog/2014/07/09/functional-reads-over-accumulo.html" title="Updated location">to the Accumulo project site</a>.</p>
<p>Table structure is a common area of discussion between all types of Accumulo users. In the relational database realm, there was usually a straightforward layout that most users could agree was ideal for storing and querying a given dataset. Data was identified by its schema, some fixed set of columns where each value within a column had some given characteristic. One of the big pushes behind the &quot;NoSQL&quot; movement was the growing pain of representing evolving data within a static schema. Applications like Accumulo removed that notion in favor of a more flexible layout where the columns vary per row, but this flexibility often sparks debates about how data is &quot;best&quot; stored, debates that frequently end without a clear-cut winner.</p>
<p>In general, I&#39;ve found that, with new users to Accumulo, it&#39;s difficult to move beyond the basic concept of GETs and PUTs of some value for a key. Rightfully so, it&#39;s analogous to a spreadsheet: get or update the cell in the given row and column. However, there&#39;s a big difference in that the spreadsheet is running on your local desktop, instead of running across many machines. In the same way, while a local spreadsheet application has some similar functionality to Accumulo, it doesn&#39;t really make sense to think about using Accumulo as you would a spreadsheet application. Personally, I&#39;ve developed a functional-programming-inspired model which I tend to follow when implementing applications against Accumulo. The model encourages simple, efficient and easily testable code, mainly as a product of modeling the client interactions against Accumulo&#39;s APIs.<br></p>
<h4>Read APIs</h4>
<p>Accumulo has two main classes for reading data from an Accumulo table: the Scanner and BatchScanner. Both accept Range(s) which limit the data read from the table based on a start and stop Key. Only data from the table that falls within those start and stop keys will be returned to the client. The reason that we have two &quot;types&quot; of classes to read data is that a Scanner will return data from a single Range in sorted order whereas the BatchScanner accepts multiple Ranges and returns the data unordered. In terms of Java language specifics, both the Scanner and BatchScanner are also Iterables, which return a Java Iterator that can be easily passed to some other function, transformation or for-loop.<br></p>
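<p>As a quick illustration of the two read paths, here is a small sketch against the Java client API; the instance name, ZooKeeper host, credentials, table name and rows are placeholders.</p>
<pre>import java.util.Collections;
import java.util.Map.Entry;

import org.apache.accumulo.core.client.BatchScanner;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class ReadExamples {
  public static void main(String[] args) throws Exception {
    Connector conn = new ZooKeeperInstance("myInstance", "zkHost1:2181")
        .getConnector("user", new PasswordToken("secret"));

    // Scanner: a single Range, entries returned in sorted order.
    Scanner scanner = conn.createScanner("clickstream", Authorizations.EMPTY);
    scanner.setRange(new Range("row_a", "row_m"));
    for (Entry&lt;Key,Value&gt; entry : scanner) {
      System.out.println(entry.getKey() + " -&gt; " + entry.getValue());
    }

    // BatchScanner: many Ranges read in parallel, entries returned unordered.
    BatchScanner bs = conn.createBatchScanner("clickstream", Authorizations.EMPTY, 10);
    bs.setRanges(Collections.singleton(new Range("row_n", "row_z")));
    for (Entry&lt;Key,Value&gt; entry : bs) {
      System.out.println(entry.getKey() + " -&gt; " + entry.getValue());
    }
    bs.close(); // the BatchScanner owns a thread pool and must be closed
  }
}</pre>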
<p>Having both a sorted, synchronous stream and an unsorted stream of Key-Value pairs from many servers in parallel allows for a variety of algorithms to be implemented against Accumulo. Both constructs keep the client agnostic to where the data came from and encourage light-weight processing of those results on the client.<br></p>
<h4>Accumulo Iterators</h4>
<p>One notable feature of Accumulo is the SortedKeyValueIterator interface, or, more succinctly, Accumulo Iterators. Typically, these iterators run inside of the TabletServer process and perform much of the heavy lifting. Iterators are used to implement a breadth of internal features such as merged file reads, visibility label filtering, versioning, and more. However, users also have the ability to leverage this server-side processing mechanism to deploy their own custom code.</p>
<p>One interesting detail about these iterators is that they each have an implicit source which provides them data to operate on. This source is also a SortedKeyValueIterator which means that the &quot;local&quot; SortedKeyValueIterator can use its own API on its data source. With this implicit hierarchy, Iterators act in concert with each other in some fixed order - they are stackable. The order in which Iterators are constructed, controlled by an Iterator&#39;s priority, determines the order of the stack. An Iterator uses its &quot;source&quot; Iterator to read data, performs some operation, and then passes it on (the next consumer could be the client or another Iterator). The design behind iterators deserves its own blog post; however, the concept to see here is that iterators are best designed to be as stateless as possible (transformations, filters, or aggregations that always net the same results given the same input).<br></p>
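<p>To make the stateless design concrete, here is a minimal sketch of a custom iterator built on Accumulo&#39;s Filter base class (itself a SortedKeyValueIterator). The Filter parent handles the source-iterator plumbing; the subclass only supplies a predicate. The column family it keeps is made up for illustration.</p>
<pre>import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.iterators.Filter;

/**
 * Stateless example iterator: keep only entries in the "page" column family
 * (a hypothetical column family used purely for illustration).
 */
public class PageColumnFilter extends Filter {
  @Override
  public boolean accept(Key k, Value v) {
    return k.getColumnFamily().toString().equals("page");
  }
}</pre>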
<h4>Functional Influences</h4>
<p>In practice, these two concepts mesh very well with each other. Data read from a table can be thought of as a &quot;stream&quot; which came from some number of operations on the server. For a Scanner, this stream of data is backed by one tablet at a time to preserve the sorted order of the table. In the case of the BatchScanner, this is happening in parallel across many tablets from many tabletservers, with the client receiving data from many distinct hosts at one time. Likewise, the Scanner and BatchScanner APIs also encourage stateless processing of this data by presenting the data as a Java Iterator. Exposing explicit batches of Key-Value pairs would encourage blocking processing of each batch, which would be counter-intuitive to the server-side processing model. The Iterator view creates a more seamless implementation paradigm on both the client and the server.<br></p>
<p>When we take a step back from Object-Oriented Java and start to think about applications in a Functional mindset, it becomes clear how these APIs encourage functional-esque code. We are less concerned about mutability and encapsulation, and more concerned about stateless operations over some immutable data. Modeling our client code like this helps encourage parallelism, as applying it in a multi-threaded environment becomes much simpler.<br></p>
<h4>Practical Application</h4>
<p>I started out talking about schemas and table layouts which might seem a bit unrelated to this discussion on the functional influences in the Accumulo API. Any decisions made on a table structure must take query requirements with respect to the underlying data into account. As a practical application of what might otherwise seem like pontification, let&#39;s consider a hypothetical system that processes clickstream data using Accumulo.</p>
<p>Clickstream data refers to logging users who visit a website, typically for the purpose of understanding usage patterns. If a website is thought of as a directed graph, where an anchor on one page which links to another page is an edge in that graph, a user&#39;s actions on that website can be thought of as a &quot;walk&quot; over that graph. In managing a website, it&#39;s typically very useful to understand usage patterns of your site: what page is most common? which links are most commonly clicked? what changes to a page make users act differently?<br></p>
<p>Now, let&#39;s abstractly consider that we store this clickstream data in Accumulo. Let&#39;s not go into specifics, but say we retain the typical row-with-columns idea: each row represents some user visiting a page on your website, keyed by a globally unique identifier. Each column would contain some information about that visit: the user who is visiting the website, the page they&#39;re visiting, the page they came from, the web-browser user-agent string, etc. Say you&#39;re the owner of this website, and you recently made a modification to your website which added a prominent link to some new content on the front-page. You want to know how many people are visiting your new content with this new link you&#39;ve added, so we want to answer the question &quot;how many times was our new link on the index page clicked by any user?&quot;. For the purposes of this example, let&#39;s assume we don&#39;t have any index tables which might help us answer this query more efficiently.</p>
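<p>To make this hypothetical layout concrete, a sketch of writing one such visit record with the Java client API might look like the following; the table name, column families, qualifiers and values are all made up for illustration, and conn is assumed to be an already-open Connector.</p>
<pre>import java.util.UUID;

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;

public class WriteClickEvent {
  public static void writeClick(Connector conn) throws Exception {
    BatchWriter writer = conn.createBatchWriter("clickstream", new BatchWriterConfig());

    // One row per visit, keyed by a globally unique identifier.
    Mutation m = new Mutation(UUID.randomUUID().toString());
    m.put("event", "user", new Value("user123".getBytes()));
    m.put("event", "page", new Value("/index.html".getBytes()));
    m.put("event", "referrer", new Value("/news.html".getBytes()));
    m.put("event", "user-agent", new Value("Mozilla/5.0".getBytes()));

    writer.addMutation(m);
    writer.close(); // flushes any buffered mutations
  }
}</pre>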
<p>Let&#39;s think about this query in terms of stateless operations and performing as much of a reduction in data returned to the client as possible. We have a few basic steps:</p>
<ol>
<li>Filter: Ignore all clickstream events that are not for the index page.</li>
<li>Filter: Ignore all clickstream events that are not for the given anchor.</li>
<li>Aggregation: Only a sum of the occurrences is needed, not the full record.</li>
</ol>
<p>The beauty in using Accumulo is that all three of these operations can be performed inside of the tablet server process without returning any unnecessary data to the client. Unwanted records can be easily skipped, while each record that matches our criteria is reduced to a single &quot;+1&quot; counter. Instead of returning each full record to the client, the tablet server can combine these counts together and simply return a sum to the client for the Range of data that was read.</p>
<p>The other perk of thinking about the larger problem in discrete steps is that it is easily parallelized. Assuming we have many tablet servers hosting the tablets which make up our clickstream table, we can easily run this query in parallel across them all using the BatchScanner. Additionally, because we&#39;ve reduced our initial problem from a large collection of records to a stream of partial sums, we&#39;ve drastically reduced the amount of work that must be performed on our (single) client. Each key-value pair returned by a server is a partial sum which can be combined in a very lightweight operation (in both memory and computation) as the result is made available. The client then has the simple task of performing one summation. We took a hard problem and performed an extreme amount of heavy lifting server-side while performing next-to-no computation in our client, which is great for web applications or thin clients.<br></p>
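<p>A hedged sketch of the client side of this query is shown below. It only implements the first filtering step server-side, using the stock RegExFilter against the hypothetical layout above (the visited page stored in the event:page column value); the anchor filter and the per-tablet partial sums would be layered on in the same way with additional iterators, such as a Combiner or a custom SortedKeyValueIterator, so the client-side tally here stands in for that final summation.</p>
<pre>import java.util.Collections;
import java.util.Map.Entry;

import org.apache.accumulo.core.client.BatchScanner;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.IteratorSetting;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.iterators.user.RegExFilter;
import org.apache.accumulo.core.security.Authorizations;

public class CountIndexClicks {
  public static long countIndexClicks(Connector conn) throws Exception {
    BatchScanner bs = conn.createBatchScanner("clickstream", Authorizations.EMPTY, 10);
    bs.setRanges(Collections.singleton(new Range())); // whole table, for the example

    // Server-side filter: only keep entries whose event:page value is the index page.
    // The column names and regex are made up for illustration.
    IteratorSetting filter = new IteratorSetting(30, "indexPage", RegExFilter.class);
    RegExFilter.setRegexs(filter, null, "event", "page", "/index\\.html", false);
    bs.addScanIterator(filter);

    long count = 0;
    for (Entry&lt;Key,Value&gt; entry : bs) {
      count++; // each surviving entry is one matching click event
    }
    bs.close();
    return count;
  }
}</pre>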
<h4>Tiered Computation</h4>
<p>This type of algorithm, a multi-stage computation, becomes very common when working with Accumulo because of the ability to push large amounts of computation to each tablet server. Tablet servers can compute aggregations, filters and/or transformations very &quot;close&quot; to the actual data, returning some reduced view of the data being read. Even when some function is very efficient, computing it over large data sets can still be extremely time-consuming. Eliminating unwanted data as early as possible can often outweigh even the most optimal algorithms due to the orders of magnitude difference in the speed of CPU over disk and network.<br></p>
<p>It&#39;s important to remember that this idea isn&#39;t new, though. The above model is actually very reminiscent of the MapReduce paradigm, just applied with different constraints. The types of problems efficiently solvable by MapReduce are also a super-set of what is possible with one representation of data stored in Accumulo. This also isn&#39;t a recommendation to replace MapReduce with Accumulo Iterators; they are not a complete replacement (a tool is rarely a 100% &quot;better&quot; replacement for another). In fact, Accumulo Iterators are often used as another level of computation to make an existing MapReduce job more efficient, typically through the AccumuloInputFormat.</p>
<p>We&#39;ve identified a category of problems - a function is applied to a batch of key-value pairs, reducing the complexity of a question asked over a distributed dataset - in which the features and APIs of Accumulo lend themselves extremely well to an efficient and simple solution. The ability to leverage Accumulo to perform these computations requires foresight into the types of questions that are to be asked of a dataset, the structure of the dataset within Accumulo, and the reduction of a larger problem into discrete functions which are each applied to the dataset by an Accumulo Iterator.<br></p>
<h1>Scaling Accumulo With Multi-Volume Support</h1>
<p><i>dlmarion, 2014-06-25</i></p>
<p>This post was moved <a href="https://accumulo.apache.org/blog/2014/06/25/scaling-accumulo-with-multivolume-support.html" title="Updated location">to the Accumulo project site</a>.</p>
<p class="western">MapReduce is a commonly used approach to
querying or analyzing large amounts of data. Typically MapReduce jobs
are created using some set of files in HDFS to produce a
result. When new files come in, they get added to the set, and the
job gets run again. A common Accumulo approach to this scenario is to
load all of the data into a single instance of Accumulo.
</p>
<p class="western"> A single instance of Accumulo can scale quite
large[1,2] to accommodate high levels of ingest and query. The manner
in which ingest is performed typically depends on latency
requirements. When the desired latency is small, inserts are
performed directly into Accumulo. When the desired latency is allowed
to be large, then a bulk style of ingest[3] can be used. There are
other factors to consider as well, but they are outside the scope of
this article.</p>
<p class="western"> On large clusters using the bulk style of ingest
input files are typically batched into MapReduce jobs to create a set
of output RFiles for import into Accumulo. The number of files per
job is typically determined by the required latency and the number of
MapReduce tasks that the cluster can complete in the given
time-frame. The resulting RFiles, when imported into Accumulo, are
added to the list of files for their associated tablets. Depending on
the configuration this will cause Accumulo to major compact these
tablets. If the configuration is tweaked to allow more files per
tablet, to reduce the major compactions, then more files need to be
opened at query time when performing scans on the tablet. Note that
no single node is burdened by the file management; but, the number of
file operations in aggregate is very large. If each server has
several hundred tablets, and there are a thousand tablet servers, and
each tablet compacts some files every few imports, we easily have
50,000 file operations (create, allocate a block, rename and delete)
every ingest cycle.</p>
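<p class="western">For reference, the hand-off from such a job to Accumulo is a single client call. The sketch below assumes the MapReduce job has already written its RFiles to an HDFS directory; the table name and paths are placeholders, and conn is an already-open Connector.</p>
<pre>import org.apache.accumulo.core.client.Connector;

public class BulkImportExample {
  public static void importRFiles(Connector conn) throws Exception {
    // Directory of RFiles produced by the MapReduce job, plus an empty directory
    // where Accumulo will move any files it fails to import.  The final argument
    // (setTime) is false, so the timestamps already written in the RFiles are kept.
    conn.tableOperations().importDirectory("clicks",
        "/bulk/job-output", "/bulk/job-failures", false);
  }
}</pre>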
<p class="western"> In addition to the NameNode operations caused by
bulk ingest, other Accumulo processes (e.g. master, gc) require
interaction with the NameNode. Single processes, like the garbage
collector, can be starved of responses from the NameNode as the NameNode is
limited in the number of concurrent operations it can service. It is not unusual for
an operator&#39;s request for “hadoop
fs -ls /accumulo” to take a minute before returning results
during the peak file-management periods. In particular, the file
garbage collector can fall behind, not finishing a cycle of
unreferenced file removal before the next ingest cycle creates a new
batch of files to be deleted.
</p>
<p class="western"> The Hadoop community addressed the NameNode
bottleneck issue with HDFS federation[4] which allows a datanode to
serve up blocks for multiple namenodes. Additionally, ViewFS allows
clients to communicate with multiple namenodes through the use of a
client-side mount table. This functionality was insufficient for
Accumulo in the 1.6.0 release as ViewFS works at a directory level; as an example, /dirA is mapped to
one NameNode and /dirB is mapped to another, and Accumulo uses a
single HDFS directory for its storage.</p>
<p class="western"> Multi-Volume support (MVS), included in 1.6.0,
includes the changes that allow Accumulo to work across multiple HDFS
clusters (called volumes in Accumulo) while continuing to use a
single HDFS directory. A new property, instance.volumes, can be
configured with multiple HDFS nameservices and Accumulo will use them
all to balance out NameNode operations. The nameservices configured
in instance.volumes may optionally use the High Availability NameNode feature as it is transparent
to Accumulo. With MVS you have two options to horizontally scale your
Accumulo instance. You can use an HDFS cluster with Federation and
multiple NameNodes or you can use separate HDFS clusters.</p>
<p class="western"> By default Accumulo will perform round-robin file
allocation for each tablet, spreading the files across the different
volumes. The file balancer is pluggable, allowing for custom
implementations. For example, if you don&#39;t use Federation and use
multiple HDFS clusters, you may want to allocate all files for a
particular table to one volume.</p>
<p class="western"> Comments in the JIRA[5] regarding backups could
lead to follow-on work. With the inclusion of snapshots in HDFS, you
could easily envision an application that quiesces the database or
some set of tables, flushes their entries from memory, and snapshots
their directories. These snapshots could then be copied to another
HDFS instance either for an on-disk backup, or bulk-imported into
another instance of Accumulo for testing or some other use.</p>
<p class="western"> The example configuration below shows how to
set up Accumulo with HA NameNodes and Federation, as it is likely the
most complex. We had to reference several web sites, one of the HDFS
mailing lists, and the source code to find all of the configuration
parameters that were needed. The configuration below includes two
sets of HA namenodes, each set servicing an HDFS nameservice in a
single HDFS cluster. In the example below, nameserviceA is serviced
by name nodes 1 and 2, and nameserviceB is serviced by name nodes 3
and 4.</p>
<p class="western">[1]
<span lang="zxx"><a class="western" href="http://ieeexplore.ieee.org/zpl/login.jsp?arnumber=6597155">http://ieeexplore.ieee.org/zpl/login.jsp?arnumber=6597155</a><span style="text-decoration:none;"><span style="color:#000000;"></span></span></span></p>
<p class="western"><span lang="zxx"><span style="text-decoration:none;"><span style="color:#000000;">[2</span>]
</span></span><span lang="zxx"><a class="western" href="http://www.pdl.cmu.edu/SDI/2013/slides/big_graph_nsa_rd_2013_56002v1.pdf">http://www.pdl.cmu.edu/SDI/2013/slides/big_graph_nsa_rd_2013_56002v1.pdf</a></span></p>
<p> </p>
<p class="western">[3]
<span lang="zxx"><a class="western" href="http://accumulo.apache.org/1.5/examples/bulkIngest.html">http://accumulo.apache.org/1.6/examples/bulkIngest.html</a></span></p>
<p> </p>
<p class="western">[4]
<span lang="zxx"><a class="western" href="https://issues.apache.org/jira/browse/HDFS-1052">https://issues.apache.org/jira/browse/HDFS-1052</a></span></p>
<p> </p>
<p class="western">[5]
<span lang="zxx"><a class="western" href="https://issues.apache.org/jira/browse/ACCUMULO-118">https://issues.apache.org/jira/browse/ACCUMULO-118</a></span></p>
<p> </p>
<p class="western"><br><i>- By Dave Marion and Eric Newton</i></p>
<h2>core-site.xml:
</h2>
<pre class="text-body-indent-western"> &lt;property&gt;
&lt;name&gt;fs.defaultFS&lt;/name&gt;
&lt;value&gt;viewfs:///&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;fs.viewfs.mounttable.default.link./nameserviceA&lt;/name&gt;
&lt;value&gt;hdfs://nameserviceA&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;fs.viewfs.mounttable.default.link./nameserviceB&lt;/name&gt;
&lt;value&gt;hdfs://nameserviceB&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;fs.viewfs.mounttable.default.link./nameserviceA/accumulo/instance_id&lt;/name&gt;
&lt;value&gt;hdfs://nameserviceA/accumulo/instance_id&lt;/value&gt;
&lt;description&gt;Workaround for ACCUMULO-2719&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.ha.fencing.methods&lt;/name&gt;
&lt;value&gt;sshfence(hdfs:22)
shell(/bin/true)&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.ha.fencing.ssh.private-key-files&lt;/name&gt;
&lt;value&gt;&lt;PRIVATE_KEY_LOCATION&gt;&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.ha.fencing.ssh.connect-timeout&lt;/name&gt;
&lt;value&gt;30000&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;ha.zookeeper.quorum&lt;/name&gt;
&lt;value&gt;zkHost1:2181,zkHost2:2181,zkHost3:2181&lt;/value&gt;
&lt;/property&gt;
</pre>
<h2>hdfs-site.xml:
</h2>
<pre class="text-body-indent-western"> &lt;property&gt;
&lt;name&gt;dfs.nameservices&lt;/name&gt;
&lt;value&gt;nameserviceA,nameserviceB&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.ha.namenodes.nameserviceA&lt;/name&gt;
&lt;value&gt;nn1,nn2&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.ha.namenodes.nameserviceB&lt;/name&gt;
&lt;value&gt;nn3,nn4&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.rpc-address.nameserviceA.nn1&lt;/name&gt;
&lt;value&gt;host1:8020&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.rpc-address.nameserviceA.nn2&lt;/name&gt;
&lt;value&gt;host2:8020&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.http-address.nameserviceA.nn1&lt;/name&gt;
&lt;value&gt;host1:50070&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.http-address.nameserviceA.nn2&lt;/name&gt;
&lt;value&gt;host2:50070&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.rpc-address.nameserviceB.nn3&lt;/name&gt;
&lt;value&gt;host3:8020&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.rpc-address.nameserviceB.nn4&lt;/name&gt;
&lt;value&gt;host4:8020&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.http-address.nameserviceB.nn3&lt;/name&gt;
&lt;value&gt;host3:50070&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.http-address.nameserviceB.nn4&lt;/name&gt;
&lt;value&gt;host4:50070&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.shared.edits.dir.nameserviceA.nn1&lt;/name&gt;
&lt;value&gt;qjournal://jHost1:8485;jHost2:8485;jHost3:8485/nameserviceA&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.shared.edits.dir.nameserviceA.nn2&lt;/name&gt;
&lt;value&gt;qjournal://jHost1:8485;jHost2:8485;jHost3:8485/nameserviceA&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.shared.edits.dir.nameserviceB.nn3&lt;/name&gt;
&lt;value&gt;qjournal://jHost1:8485;jHost2:8485;jHost3:8485/nameserviceB&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.namenode.shared.edits.dir.nameserviceB.nn4&lt;/name&gt;
&lt;value&gt;qjournal://jHost1:8485;jHost2:8485;jHost3:8485/nameserviceB&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.client.failover.proxy.provider.nameserviceA&lt;/name&gt;
&lt;value&gt;org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.client.failover.proxy.provider.nameserviceB&lt;/name&gt;
&lt;value&gt;org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.ha.automatic-failover.enabled.nameserviceA&lt;/name&gt;
&lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;dfs.ha.automatic-failover.enabled.nameserviceB&lt;/name&gt;
&lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
</pre>
<h2 class="text-body-indent-western">accumulo-site.xml:</h2>
<pre class="text-body-indent-western"> &lt;property&gt;
&lt;name&gt;instance.volumes&lt;/name&gt;
&lt;value&gt;hdfs://nameserviceA/accumulo,hdfs://nameserviceB/accumulo&lt;/value&gt;
&lt;/property&gt;</pre>
<h1>Getting Started with Apache Accumulo 1.6.0</h1>
<p><i>elserj, 2014-05-27</i></p>
<p>This post was moved <a href="https://accumulo.apache.org/blog/2014/05/27/getting-started-with-accumulo-1.6.0.html" title="Updated location">to the Accumulo project site</a>.</p>
<p>On May 12th, 2014, the Apache Accumulo project happily announced version 1.6.0 to the community. This is a new major release for the project which contains numerous new features and fixes. For the full list of notable changes, I&#39;d recommend that you check out the <a href="http://accumulo.apache.org/release_notes/1.6.0.html" title="Apache Accumulo 1.6.0 release notes">release notes</a> that were published alongside the release itself. For this post, I&#39;d like to cover some of the installation-level changes that will be new to users who are already familiar with the project.<br></p>
<h3>Download the release</h3>
<p>As always, you can find our releases on the downloads page at <a href="http://accumulo.apache.org/downloads/">http://accumulo.apache.org/downloads/</a>.&nbsp; You have the choice of downloading the source and building it yourself, or choosing the binary tarball which already contains pre-built jars for use.<br></p>
<h3>Native Maps</h3>
<p>One of the major components of the original <a title="BigTable" href="http://research.google.com/archive/bigtable.html">BigTable</a> design was an &quot;In-Memory Map&quot; which provided fast insert and read operations. Accumulo implements this using a C++ sorted map with a custom allocator which is invoked by the TabletServer using JNI. Each TabletServer uses its own &quot;native&quot; map. It is highly desirable to use this native map as it comes with a notable performance increase over a Java map (which is the fallback when the Accumulo shared library is not found) in addition to greatly reducing the TabletServer&#39;s JVM garbage collector stress when ingesting data.</p>
<p>In previous versions, the binary tarball contained a pre-compiled version of the native library (under lib/native/). Shipping a compiled binary was a convenience, but it also caused confusion when it didn&#39;t work on systems whose installed GCC toolchains were incompatible with the one the binary was built against. As such, we have stopped bundling the pre-built shared library in favor of users building this library on their own, and instead include an accumulo-native.tar.gz file within the lib directory which contains the necessary files to build the library yourself.<br></p>
<p>To reduce the burden on users, we&#39;ve also introduced a new script inside of the bin directory:</p>
<pre> build_native_map.sh</pre>
<p>Invoking this script will automatically unpack, build and install the native map in $ACCUMULO_HOME/lib/native. If you&#39;ve used older versions of Accumulo, you will also notice that the library name is different in an attempt to better follow standard conventions: libaccumulo.so on Linux and libaccumulo.dylib on Mac OS X.</p>
<h3>Example Configurations</h3>
<p>Apache Accumulo still bundles a set of example configuration files in conf/examples. Each sub-directory contains the complete set of files to run on a single node with the named memory limitations. For example, the files contained in conf/examples/3GB/native-standalone will run Accumulo on a single node, with native maps (don&#39;t forget to build them first!), within a total memory footprint of 3GB. Copy the contents of one of these directories into conf/ and make sure that your relevant installation details (e.g. HADOOP_PREFIX, JAVA_HOME, etc) are properly set in accumulo-env.sh. For example:<br></p>
<pre> cp $ACCUMULO_HOME/conf/examples/3G/native-standalone/* $ACCUMULO_HOME/conf
</pre>
<p>Alternatively, a new script, bootstrap_config.sh, was also introduced that can be invoked instead of manually copying files. It will step through a few choices (memory usage, in-memory map type, and Hadoop major version), and then automatically create the configuration files for you.</p>
<pre> $ACCUMULO_HOME/bin/bootstrap_config.sh
</pre>
<p>One notable change in these scripts over previous versions is that they default to using Apache Hadoop 2 packaging details, such as the Hadoop conf directory and jar locations. It is highly recommended by the community that you use Apache Accumulo 1.6.0 with at least Apache Hadoop 2.2.0, most notably, to ensure that you will not lose data in the face of power failure. If you are still running on a Hadoop 1 release (1.2.1), you will need to edit both accumulo-env.sh and accumulo-site.xml. There are comments in each file which instruct you what needs to be changed.</p>
<h3>Starting Accumulo</h3>
<p>Initializing and starting Accumulo hasn&#39;t changed at all! After you have created the configuration files and, if you&#39;re using them, built the native maps, run:</p>
<pre> accumulo init</pre>
<p>This will prompt you to name your Accumulo instance and set the Accumulo root user&#39;s password. Then start Accumulo using:</p>
<pre> $ACCUMULO_HOME/bin/start-all.sh
</pre>
<h1>The Accumulo ClassLoader</h1>
<p><i>dlmarion, 2014-05-03</i></p>
<p>This post was moved <a href="https://accumulo.apache.org/blog/2014/05/03/accumulo-classloader.html" title="Updated location">to the Accumulo project site</a>.</p>
<p>The Accumulo classloader is an integral part of the software. The classloader is created before each of the services (master, tserver, gc, etc.) is started and it is set as the classloader for that service. The classloader was rewritten in version 1.5 and this article will explain the new behavior.<br></p>
<h2>First, some history<br></h2>
<p>The classloader in version 1.4 used a simple hierarchy of two classloaders that would load classes from locations specified by two properties. The locations specified by the &quot;general.classpaths&quot; property would be used to create a parent classloader and locations specified by the &quot;general.dynamic.classpaths&quot; property were used to create a child classloader. The child classloader would monitor the specified locations for changes and when a change occurred the child classloader would be replaced with a new instance. Classes that referenced the orphaned child classloader would continue to work and the classloader would be garbage collected when no longer referenced. The diagram below shows the relationship between the classloaders in Accumulo 1.4.</p>
<p align="center"> <img src="https://blogs.apache.org/accumulo/mediaresource/e0cd3425-2bfe-4b12-9735-8fee632d8f71"><br></p>
<p>The only place where the dynamic classloader would come into play is for user iterators and their dependencies. The general advice for using this classloader would be to put the jars containing your iterators in the dynamic location. Everything else that does not change very often or would require a restart should be put into the non-dynamic location.<br></p>
<p>There are a couple of things to note about the classloader in 1.4. First, if you modified the dynamic locations too often, you would run out of perm-gen space. This is likely due to unreferenced classes not being unloaded from the JVM. This is captured in <a href="https://issues.apache.org/jira/browse/ACCUMULO-599">ACCUMULO-599</a>. Secondly, when you modified files in dynamic locations within the same cycle, it would on occasion miss the second change.<br></p>
<h2>Out with the old, in with the new<br></h2>
<p>The Accumulo classloader was rewritten in version 1.5. It maintains the same dynamic capability and includes a couple of new features. The classloader uses <a href="http://commons.apache.org/proper/commons-vfs/">Commons VFS</a> so that it can load jars and classes from a variety of sources, including HDFS. Being able to load jars from one location (hdfs, http, etc) will make it easier to deploy changes to your cluster. Additionally, we introduced the notion of classloader contexts into Accumulo. This is not a new concept for anyone that has used an application server, but the implementation is a little different for Accumulo.</p>
<p>The hierarchy set up by the new classloader uses the same property names as the old classloader. In the most basic configuration the locations specified by &quot;general.classpaths&quot; are used to create the root of the application classloader hierarchy. This classloader is a <a href="http://docs.oracle.com/javase/6/docs/api/java/net/URLClassLoader.html">URLClassLoader</a> and it does not support dynamic reloading. If you only specify this property, then you are loading all of your jars from the local file system and they will not be monitored for changes. We will call this top level application classloader the SYSTEM classloader. Next, a classloader is created that supports VFS sources and reloading. The parent of this classloader is the SYSTEM classloader and we will call this the VFS classloader. If the &quot;general.vfs.classpaths&quot; property is set, the VFS classloader will use this location. If the property is not set, it will use the value of &quot;general.dynamic.classpaths&quot; with a default value of $ACCUMULO_HOME/lib/ext to support backwards compatibility. The diagram below shows the relationship between the classloaders in Accumulo 1.5.<br></p>
<p> </p>
<p align="center"> <img src="https://blogs.apache.org/accumulo/mediaresource/69437ddb-5f66-4657-8190-8e71fa10dfe8"><br></p>
<h2>Running Accumulo From HDFS</h2>
<p> If you have defined &quot;general.vfs.classpaths&quot; in your Accumulo configuration, then you can use the bootstrap_hdfs.sh script in the bin directory to seed HDFS with the Accumulo jars. A couple of jars will remain on the local file system for starting services. Now when you start up Accumulo, the master, gc, tracer, and all of the tablet servers will get their jars and classes from HDFS. The bootstrap_hdfs.sh script sets the replication on the directory, but you may want to set it higher after bootstrapping. An example configuration setting would be:</p>
<pre> &lt;property&gt;
&lt;name&gt;general.vfs.classpaths&lt;/name&gt;
&lt;value&gt;hdfs://localhost:8020/accumulo/system-classpath&lt;/value&gt;
&lt;description&gt;Configuration for a system level vfs classloader. Accumulo jars can be configured here and loaded out of HDFS.&lt;/description&gt;
&lt;/property&gt;
</pre>
<h2>About Contexts</h2>
<p>You can also define classloader contexts in your accumulo-site.xml file. A context is defined by a user supplied name and it references locations like the other classloader properties. When a context is defined in the configuration, it can then be applied to one or more tables. When a context is applied to a table, then a classloader is created for that context. If multiple tables use the same context, then they share the context classloader. The context classloader is a child to the VFS classloader created above.<br></p>
<p>The goal here is to enable multiple tenants to share the same Accumulo instance. For example, we may have a context called &#39;app1&#39; which references the jars for application A. We may also have another context called app2 which references the jars for application B. By default the context classloader delegates to the VFS classloader. This behavior may be overridden as seen in the app2 example below. The context classloader also supports reloading like the VFS classloader.<br></p>
<pre> &lt;property&gt;
&lt;name&gt;general.vfs.context.classpath.app1&lt;/name&gt;
&lt;value&gt;hdfs://localhost:8020/applicationA/classpath/.*.jar,file:///opt/applicationA/lib/.*.jar&lt;/value&gt;
&lt;description&gt;Application A classpath, loads jars from HDFS and local file system&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;general.vfs.context.classpath.app2.delegation=post&lt;/name&gt;
&lt;value&gt;hdfs://localhost:8020/applicationB/classpath/.*.jar,http://my-webserver/applicationB/.*.jar&lt;/value&gt;
&lt;description&gt;Application B classpath, loads jars from HDFS and HTTP, does not delegate to parent first&lt;/description&gt;
&lt;/property&gt;</pre>
<p>Context classloaders do not have to be defined in the accumulo-site.xml file. The &quot;general.vfs.context.classpath.{context}&quot; property can be defined on the table either programmatically or manually in the shell. Then set the &quot;table.classpath.context&quot; property on your table.</p>
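<p>As a quick programmatic sketch, assigning a context to a table through the Java API might look like the following; the table name and context name are placeholders, and conn is an already-open Connector.</p>
<pre>import org.apache.accumulo.core.client.Connector;

public class AssignClasspathContext {
  public static void assign(Connector conn) throws Exception {
    // Point the table at the "app1" context defined via the
    // general.vfs.context.classpath.app1 property in accumulo-site.xml.
    conn.tableOperations().setProperty("mytable", "table.classpath.context", "app1");
  }
}</pre>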
<h2>Known Issues</h2>
<p> </p>
<p>Remember the two issues I mentioned above? Well, they are still a problem.</p>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/ACCUMULO-1507">ACCUMULO-1507</a> is tracking <a href="https://issues.apache.org/jira/browse/VFS-487">VFS-487</a> for frequent modifications to files. <br></li>
<li>If you start running out of perm-gen space, take a look at <a href="https://issues.apache.org/jira/browse/ACCUMULO-599">ACCUMULO-599</a> and try applying the JVM settings for class unloading. <br></li>
<li>Additionally, there is an issue with the bootstrap_hdfs.sh script detailed in <a href="https://issues.apache.org/jira/browse/ACCUMULO-2761">ACCUMULO-2761</a>. There is a workaround listed in the issue.</li>
</ul>
<p>Please email the <a href="mailto:dev@accumulo.apache.org">dev</a> list for comments and questions.</p><p><i>By Dave Marion</i></p>