I was thinking the best way for me to work with it would be to use the Java class and just use that as is.

I imported it into my project and tried to work with it as is, by just instantiating the ColumnInterpreter as BigDecimalColumnInterpreter. That threw errors and also complained about not knowing where to find such a class.

So I did some reading and found out that I'd need to have an Endpoint for it. So I imported AggregateImplementation and AggregateProtocol into my workspace, renamed them, and refactored them where necessary to take BigDecimal. Then I re-exported the jar and had another try.

I get supplied with doubles from sensors, but in the end I lose too much precision if I do my aggregations on double; otherwise I'd go for it. I use 0.92.1, from Cloudera CDH4. I've done some initial testing with LongColumnInterpreter on a dataset that I've generated, to do some testing and get accustomed to stuff, and that worked like a charm after some initial stupidity on my side. So now I'm trying to do some testing with the real data, which comes in as double and gets parsed to BigDecimal before writing.

I have been running the same class on my distributed cluster for aggregation. It has been working fine. The only difference is that I use the methods provided in the com.intuit.ihub.hbase.poc.aggregation.client.AggregationClient class. IMHO, you don't need to define an Endpoint for using the BigDecimalColumnInterpreter.

I presume you mean something like this:

    Scan scan = new Scan(_start, _end);
    scan.addFamily(family.getBytes());
    final ColumnInterpreter<BigDecimal, BigDecimal> ci =
        new mypackage.BigDecimalColumnInterpreter();
    AggregationClient ag =
        new org.apache.hadoop.hbase.client.coprocessor.AggregationClient(config);
    BigDecimal sum = ag.sum(Bytes.toBytes(tableName), ci, scan);

When I call this, with the Endpoint in place and loaded as a jar, I get the above error. When I call it without the endpoint loaded as coprocessor, though, I get this:

You need to add the column qualifier explicitly to the scanner (Scan#addColumn); you have only added the column family. I am also assuming that you are writing a byte array of the BigDecimal object as the value of these cells in HBase. Is that right?

Yes, we do. :) Let me know the outcome. If you look at the BD ColumnInterpreter, the getValue method is converting the byte array into BigDecimal, so you should not have any problem. The BD ColumnInterpreter is pretty similar to LongColumnInterpreter.

Here is the code snippet for the getValue() method, which converts a byte[] to BigDecimal:
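The snippet itself didn't survive in this copy of the thread. As a rough, self-contained sketch of the conversion being described (assuming the cell value was stored as a 4-byte scale followed by the unscaled value's two's-complement bytes, which is how HBase's Bytes.toBytes(BigDecimal) lays it out; class and method names here are illustrative, not the actual HBase source):

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.nio.ByteBuffer;

public class BigDecimalValue {

    // Hypothetical encoder: 4-byte scale, then the unscaled value's bytes.
    static byte[] toBytes(BigDecimal v) {
        byte[] unscaled = v.unscaledValue().toByteArray();
        return ByteBuffer.allocate(4 + unscaled.length)
                .putInt(v.scale())
                .put(unscaled)
                .array();
    }

    // The core of a getValue()-style conversion: byte[] back to BigDecimal.
    static BigDecimal getValue(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        int scale = buf.getInt();
        byte[] unscaled = new byte[buf.remaining()];
        buf.get(unscaled);
        return new BigDecimal(new BigInteger(unscaled), scale);
    }

    public static void main(String[] args) {
        BigDecimal original = new BigDecimal("1234.5678");
        BigDecimal roundTripped = getValue(toBytes(original));
        System.out.println(roundTripped); // 1234.5678
    }
}
```

The point of the thread's advice is that as long as the writer and this decoder agree on the byte layout, getValue() recovers the exact BigDecimal with no precision loss.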

I haven't really gotten to working on this since last Wednesday. I checked readFields() and write() today, but don't really see why I would need to reimplement those. Admittedly, I'm not that into the whole HBase codebase yet, so there is a good chance I'm missing something here.

Also, Anil, what HBase library are you coding this against? It does seem like madness that, even though we're both using this identically, it does not work for me.

I am using only CDH4 libraries. I use the jars present under the hadoop and hbase install dirs. In my last email I gave you some more pointers. Try to follow them and see what happens. If it still doesn't work for you, then I will try to write a utility to test the BigDecimalColumnInterpreter on your setup as well.

So I'm slowly getting an overview of the code here. I haven't really understood the problem yet, though.

DataInput and DataOutput cannot handle BigDecimal, which seems to be somewhere close to the root cause of the problem. The error is being triggered in HBaseServer on line 1642, param.readFields(dis); which goes through org.apache.hadoop.io.Writable, which declares write() and readFields(), and I assume is being implemented by HbaseObjectWritable#readFields. In HbaseObjectWritable#readObject, the DataInput then gets checked for being a primitive data type and read accordingly.
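Since DataInput/DataOutput indeed have no built-in BigDecimal support, a custom write()/readFields() pair has to spell out the encoding field by field. A minimal sketch of what such a pair could look like, using only plain java.io (this is an illustration of the Writable-style contract, not the actual HBase code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.math.BigDecimal;
import java.math.BigInteger;

public class BigDecimalWritableSketch {

    // write(): DataOutput only knows primitives and raw bytes, so emit the
    // scale, the length of the unscaled bytes, then the bytes themselves.
    static void write(DataOutput out, BigDecimal v) throws IOException {
        byte[] unscaled = v.unscaledValue().toByteArray();
        out.writeInt(v.scale());
        out.writeInt(unscaled.length);
        out.write(unscaled);
    }

    // readFields(): read the same fields back in the same order.
    static BigDecimal readFields(DataInput in) throws IOException {
        int scale = in.readInt();
        byte[] unscaled = new byte[in.readInt()];
        in.readFully(unscaled);
        return new BigDecimal(new BigInteger(unscaled), scale);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        write(new DataOutputStream(bos), new BigDecimal("99.95"));
        BigDecimal back = readFields(
                new DataInputStream(new ByteArrayInputStream(bos.toByteArray())));
        System.out.println(back); // 99.95
    }
}
```

The key constraint is symmetry: whatever order and widths write() uses, readFields() must mirror exactly, since the stream carries no self-describing type information.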

Now if I interpret Bytes#valueOf() correctly, it just takes a BigDecimal value and converts _just_ the value to byte[] and not the whole object. So what readObject finds here should be interpreted as byte[] and happily passed on. The first method that should even care about parsing this to BigDecimal would then be BigDecimalColumnInterpreter#getValue().
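That distinction is easy to see in plain Java: converting just the value yields a handful of bytes, while serializing the whole object drags Java's class metadata along with it. A small JDK-only illustration (not HBase code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.math.BigDecimal;

public class ValueVsObject {
    public static void main(String[] args) throws IOException {
        BigDecimal v = new BigDecimal("42.5");

        // Just the value: the unscaled digits as bytes (a real encoding
        // would add a few more bytes for the scale).
        byte[] valueOnly = v.unscaledValue().toByteArray();

        // The whole object: full Java serialization, headers and all.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(v);
        oos.flush();
        byte[] wholeObject = bos.toByteArray();

        System.out.println(valueOnly.length + " bytes vs " + wholeObject.length + " bytes");
    }
}
```

A cell written with the value-only form carries no hint that it is a BigDecimal, which is exactly why nothing before the ColumnInterpreter's getValue() needs to (or can) know the type.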

To test this, I decided to override write and readFields, as I inherit them from Exec anyway; however, I have no understanding of how these methods work. I put in a few printlns to get a feeling for it, but it turns out it is never even being called, at all.

2012/9/10 anil gupta <[EMAIL PROTECTED]>

I am still failing to understand how the same BigDecimalColumnInterpreter (BDCI) is working at my end. I am doing the interpretation of the bytes in the getValue method and I haven't faced any trouble yet. There has to be some difference between our setups, or you have taken a very different approach using BDCI. I have created a util class which will create a table in HBase and then run an aggregation. Here is the link to the class: https://dl.dropbox.com/u/64149128/TestBigDecimalCI.java . Please make sure that you use the correct libraries on the client and server side. You can run this code directly from the client side. I am using HBase 0.92.1 (cdh4.0.0). Let me know the outcome or if you face any error in running that class. I don't have cluster access right now. If possible, tomorrow morning I will also try to provide the stack trace of the method call of BigDecimalColumnInterpreter.getValue().

@Ted: It would be of great help if you could also give the above-mentioned utility a try on your own setup and let me know the outcome.

The regionservers need to have the jar which contains the BigDecimalColumnInterpreter class in their classpath. I was successfully able to run the utility class once I copied the jar across the cluster and rebooted the cluster. Also, please specify the following property in hbase-site.xml to load the co-processor:

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
</property>

> Hi Guys,
>
> The regionservers need to have the jar which contains the
> BigDecimalColumnInterpreter class in their classpath. I was successfully
> able to run the utility class once I copied the jar across the cluster and
> rebooted the cluster. Also, please specify the following property in
> hbase-site.xml to load the co-processor:
> <property>
> <name>hbase.coprocessor.region.classes</name>
> <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
> </property>
>
> Let me know once you guys try this out.
>
> Thanks,
> Anil Gupta

Hi, thank you, Anil. I realized I had put the jar only on the master and expected stuff to work from there. I have now deployed it on all the regionservers and loaded AggregateImplementation on the regions. It works now. Thank you!