Details

Description

I have several large application/HBase clusters where an application node occasionally needs to talk to HBase in a different cluster. To help enforce some of my consistency guarantees, I have a sentinel table that is updated atomically as users interact with the system. This works well with the regular HBase client, but the REST client does not implement the checkAndPut and checkAndDelete operations, which exposes the application to race conditions that must be worked around. It would be ideal if the REST client supported the same checkAndPut/checkAndDelete operations.
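The semantics the reporter wants from the REST client can be sketched with a stand-in for the sentinel table. This is an illustration only: the ConcurrentHashMap here models a single sentinel column, and the class and method names are invented; the real operation is HTable.checkAndPut(row, family, qualifier, expectedValue, put).

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for checkAndPut semantics: the mutation is applied
// only if the current value matches the expected one, atomically.
public class CheckAndPutSketch {
    static final ConcurrentHashMap<String, String> sentinel = new ConcurrentHashMap<>();

    // Returns true only if the stored value equaled `expected` and was swapped.
    // A null `expected` means "succeed only if the row is absent".
    static boolean checkAndPut(String row, String expected, String newValue) {
        if (expected == null) {
            return sentinel.putIfAbsent(row, newValue) == null;
        }
        return sentinel.replace(row, expected, newValue);
    }

    public static void main(String[] args) {
        System.out.println(checkAndPut("row1", null, "v1"));  // true: row absent
        System.out.println(checkAndPut("row1", "v1", "v2"));  // true: check matched
        System.out.println(checkAndPut("row1", "v1", "v3"));  // false: stale check
    }
}
```

The third call fails because the check value is stale — exactly the race an unconditional REST put cannot detect.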

Hadoop QA
added a comment - 30/May/12 19:24 -1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12530241/4720_trunk_v3.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
-1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests:
org.apache.hadoop.hbase.master.TestSplitLogManager
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2057//testReport/
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2057//console
This message is automatically generated.

Hadoop QA
added a comment - 30/May/12 18:22 -1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12530230/4720_trunk_v3.1.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
-1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests:
org.apache.hadoop.hbase.master.TestSplitLogManager
org.apache.hadoop.hbase.master.TestLogsCleaner
org.apache.hadoop.hbase.regionserver.TestServerCustomProtocol
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2055//testReport/
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2055//console
This message is automatically generated.

Andrew Purtell
added a comment - 30/May/12 17:11 - edited +1, except for a small debug logging nit that can be corrected on commit. See https://reviews.apache.org/r/5259/diff/3/?file=110632#file110632line294

Hadoop QA
added a comment - 30/May/12 04:03 -1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12530140/4720_trunk_v3.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
-1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests:
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2044//testReport/
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2044//console
This message is automatically generated.

Hadoop QA
added a comment - 30/May/12 00:02 -1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12530103/4720_trunk_v2.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
-1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests:
org.apache.hadoop.hbase.security.access.TestZKPermissionsWatcher
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2037//testReport/
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2037//console
This message is automatically generated.

Jimmy Xiang
added a comment - 29/May/12 23:11 Addressed Stack's comments.
@Stack, I moved the duplicated code to a shared place.
I defined constants for put and delete. I could not lowercase it first since it can be null.

Hadoop QA
added a comment - 29/May/12 22:32 -1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12530092/4720_trunk.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
-1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2033//testReport/
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2033//console
This message is automatically generated.

Andrew Purtell
added a comment - 16/May/12 02:34 Patch v7 still applies to trunk, but with some fuzz. Implementation looks fine, appropriate use of query parameters and HTTP return codes, thanks for implementing it this way Mubarak. The only thing that looks missing is a check for the not-modified case in TestRowResource and TestDelete.

Andrew Purtell
added a comment - 24/Jan/12 18:12 Anyway, my apologies, but this commit must be reverted. It is poor and IMO unacceptable design to have such an inconsistency in how request URLs should be constructed. Either the current semantics for REST paths remain the same, that is /table/row/ ..., or every operation is changed to specify the operation first, e.g. /operation/table/row ... .

Andrew Purtell
added a comment - 24/Jan/12 18:10 Ted, I raised an objection on this issue and you committed it with that objection unaddressed, before I had a chance to ack the commit candidate. I apologize if it was not clear that my earlier objection was a -1. It was. I thought it clear enough; my mistake.

Ted Yu
added a comment - 24/Jan/12 18:06 @Andrew:
Glad to see your response.
Mubarak responded to your comment at the end of:
https://issues.apache.org/jira/browse/HBASE-4720?focusedCommentId=13184647&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13184647
which refers to Stack's comment: https://issues.apache.org/jira/browse/HBASE-4720?focusedCommentId=13169969&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13169969
I have been requesting your comment since Jan. 12th: https://issues.apache.org/jira/browse/HBASE-4720?focusedCommentId=13185150&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13185150
I didn't see a -1 until this morning.

Andrew Purtell
added a comment - 24/Jan/12 17:52 Either the current semantics for REST paths remain the same, that is /table/row/ ..., or every operation is changed to specify the operation first, e.g. /operation/table/row ... .

Andrew Purtell
added a comment - 24/Jan/12 17:50 Actually Ted, you should now honor my -1 and revert that commit. Next time, allow me a chance to find out there is even a discussion happening. Stack brought this to my attention only late last night.

Andrew Purtell
added a comment - 24/Jan/12 17:49 Ted, I am going to revert your commit.
The URI format for requests is '/<table>/<row>/ ...' This violates that by adding, just for check-and cases, a prefix. Having a special case like that should be avoided. What about handling this in TableResource, with a query parameter? '/<table>/<row>/?check' E.g.Then you won't need CheckAndXTableResource classes. Additionally use the appropriate HTTP operations. PUT/POST for check-and-put. DELETE for check-and-delete. The spec does not forbid bodies in DELETE requests. (I am unsure if Jetty/Jersey will support it however.)
We have discussed the design choices earlier (refer comments in the same JIRA), Stack and Ted have voted for option # 2 (/checkandput, /checkanddelete) option. If i have to go back to option #1 then i will have to re-work most of the stuff here.
This has not changed, therefore -1.
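For illustration, the two URI schemes under debate might be contrasted as below. The helper names and the exact query-parameter spelling (?check=put) are hypothetical, chosen only to show the shape of each design, not the gateway's actual API.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the two request-URI designs discussed in this thread.
public class CheckAndPutUris {
    // Option 1 (Andrew's proposal): keep the '/<table>/<row>' path convention
    // and signal the conditional operation with a query parameter.
    static String queryParamStyle(String table, String row) {
        return "/" + URLEncoder.encode(table, StandardCharsets.UTF_8)
             + "/" + URLEncoder.encode(row, StandardCharsets.UTF_8)
             + "?check=put";
    }

    // Option 2 (committed, then reverted): prefix the path with the operation.
    static String prefixStyle(String table, String row) {
        return "/checkandput/" + URLEncoder.encode(table, StandardCharsets.UTF_8)
             + "/" + URLEncoder.encode(row, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(queryParamStyle("mytable", "row1")); // /mytable/row1?check=put
        System.out.println(prefixStyle("mytable", "row1"));     // /checkandput/mytable/row1
    }
}
```

Option 1 keeps every resource addressed the same way; option 2 makes the check-and operations the only paths that lead with a verb, which is the inconsistency the -1 objects to.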

Mubarak Seyed
added a comment - 12/Jan/12 01:38 This patch does not cover the following from Andrew's comments:
The REST gateway does support a batch put operation, where the supplied model contains multiple rows. The request URI will contain the table name and a row key, but the row key would be ignored and should be set to something known not to exist, like "submit". (Row name in the model takes preference to whatever was supplied in the URI.) See RowResource, starting around line 160. This gives the client the option of submitting work in batch, to reduce overheads.
So optionally here you could retrieve a list of rows and process them, building a response that includes the disposition of each.
HTable.checkAndPut and HTable.checkAndDelete support only one row at a time. I don't think we need to support a batch of checkAndPut and checkAndDelete.
The URI format for requests is '/<table>/<row>/...'. This violates that by adding, just for the check-and cases, a prefix. Having a special case like that should be avoided. What about handling this in TableResource, with a query parameter, e.g. '/<table>/<row>/?check'? Then you won't need the CheckAndXTableResource classes. Additionally, use the appropriate HTTP operations: PUT/POST for check-and-put, DELETE for check-and-delete. The spec does not forbid bodies in DELETE requests. (I am unsure whether Jetty/Jersey supports that, however.)
We have discussed the design choices earlier (refer to comments in the same JIRA); Stack and Ted voted for option #2 (/checkandput, /checkanddelete). If I have to go back to option #1 then I will have to rework most of the stuff here.

Mubarak Seyed
added a comment - 12/Jan/12 01:29 The attached file (HBASE-4720.trunk.v3.patch) contains changes for Andrew Purtell's code review comments.
This patch does not cover the following from Andrew's comments:
>The REST gateway does support a batch put operation, where the supplied model contains multiple rows. The request URI will contain the table name and a row key, but the row key would be ignored and should be set to something known not to exist, like "submit". (Row name in the model takes preference to whatever was supplied in the URI.) See RowResource, starting around line 160. This gives the client the option of submitting work in batch, to reduce overheads.
So optionally here you could retrieve a list of rows and process them, building a response that includes the disposition of each.
HTable.checkAndPut and HTable.checkAndDelete support only one row at a time (http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#checkAndPut(byte[], byte[], byte[], byte[], org.apache.hadoop.hbase.client.Put)). I don't think we need to support a batch of checkAndPut and checkAndDelete.
>The URI format for requests is '/<table>/<row>/...'. This violates that by adding, just for the check-and cases, a prefix. Having a special case like that should be avoided. What about handling this in TableResource, with a query parameter, e.g. '/<table>/<row>/?check'? Then you won't need the CheckAndXTableResource classes. Additionally, use the appropriate HTTP operations: PUT/POST for check-and-put, DELETE for check-and-delete. The spec does not forbid bodies in DELETE requests. (I am unsure whether Jetty/Jersey supports that, however.)
We have discussed the design choices earlier (refer to comments in the same JIRA); Stack and Ted voted for option #2 (/checkandput, /checkanddelete). If I have to go back to option #1 then I will have to rework most of the stuff here.

Ted Yu
added a comment - 23/Dec/11 02:03 The failed tests were due to NumberFormatException (see MAPREDUCE-3583).
TestMasterReplication hung, but that shouldn't have been caused by this JIRA.
The latest patch should be good to go.

stack
added a comment - 22/Dec/11 15:52 @Mubarak Understood (re: formatter). I'd suggest you not run it on the whole file when you're only changing a few lines; we haven't been doing a good job enforcing formatting across the code base, so reformatting an entire file will usually turn up loads of changes, and with loads of changes your patch gets big. Big patches are harder to get reviews on. Just some advice for next time.
On the test URL making, sounds good.

Mubarak Seyed
added a comment - 22/Dec/11 00:05 I addressed most of the review comments; I will wait until I get more review comments from Andrew (and others).
The update parameter is not being used in other places either:
RowResource.java:
----------------
Response update(final CellSetModel model, final boolean replace)
Response updateBinary(final byte[] message, final HttpHeaders headers, final boolean replace)
ScannerResource.java:
---------------------
Response update(final ScannerModel model, final boolean replace, final UriInfo uriInfo)

Ted Yu
added a comment - 21/Dec/11 17:22 @Mubarak:
I think the rule you posted only governs comments.
I tried auto-formatting a long line containing 3 identical short statements. There was no auto-wrapping.

Mubarak Seyed
added a comment - 21/Dec/11 06:49 Thanks Stack.
> Suggest you not do stuff like the below in future, because it bloats your patch, making it more susceptible to rot, and besides it is not directly related to what you are trying to fix, so it is distracting for reviewers (for the next time):
I had formatted the code using eclipse_formatter_apache.xml from HBASE-3678. I apologize for messing up the formatting. Can you please advise on code formatting? (I believe the HBase book also refers to HBASE-3678 for code formatting.)
> Is this safe? e.g. what if row has binary characters in it? Should these be base64'd or something?
Other test methods in src/test/java/org/apache/hadoop/hbase/rest/TestRowResources.java use the same way to build the URI (deleteRow, deleteValue, putValuePB, etc.). I just copied the code from those methods.
> You have some lines that are way too long.
eclipse-code-formatter.xml uses a line length of 80; please advise on line length:
<setting id="org.eclipse.jdt.core.formatter.comment.line_length" value="80" />

+ Suggest you not do stuff like below in future because it bloats your patch making it more susceptible to rot and besides is not related directly to what you are trying to your fix so distracting for reviewers (for the next time):

stack
added a comment - 21/Dec/11 04:56 On patch:
+ Suggest you not do stuff like the below in future, because it bloats your patch, making it more susceptible to rot, and besides it is not directly related to what you are trying to fix, so it is distracting for reviewers (for the next time):
- private static final HBaseRESTTestingUtility REST_TEST_UTIL =
- new HBaseRESTTestingUtility();
+ private static final HBaseRESTTestingUtility REST_TEST_UTIL = new HBaseRESTTestingUtility();
+ Is this safe? e.g. what if row has binary characters in it? Should these be base64'd or something?
+ path.append('/');
+ path.append( "checkandput" );
+ path.append('/');
+ path.append(table);
+ path.append('/');
+ path.append(row);
+ path.append('/');
+ path.append(column);
I suppose it's not needed in a test (I missed that this is test code).
You have some lines that are way too long.
Else patch looks good to me on cursory review.
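The base64 idea raised in the review above could be sketched as follows, assuming URL-safe base64 is an acceptable way to embed arbitrary binary row keys in a path segment. The class and method names are invented for illustration; the REST gateway's actual encoding rules are not specified in this thread.

```java
import java.util.Arrays;
import java.util.Base64;

// Sketch: URL-safe base64 keeps binary row keys free of '/', '?', '&' and
// control characters, so they can sit safely inside a request path segment.
public class RowKeyEncoding {
    static String encodeRow(byte[] row) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(row);
    }

    static byte[] decodeRow(String segment) {
        return Base64.getUrlDecoder().decode(segment);
    }

    public static void main(String[] args) {
        byte[] binaryRow = { 0x00, 0x2F, (byte) 0xFF };  // NUL, '/', 0xFF
        String segment = encodeRow(binaryRow);
        System.out.println(segment);                               // URL-safe text
        System.out.println(Arrays.equals(decodeRow(segment), binaryRow));  // true
    }
}
```

As the reviewer notes, this matters less in test code with known-printable keys, but a production client cannot assume row keys are URI-safe.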

Ted Yu
added a comment - 20/Dec/11 06:52 For patch based on 0.90 branch, you can still upload to reviewboard.
Here was from Jonathan Hsieh:
Assuming you are using git and have a single committed patch which is your
current branch, I've found using git's 'git format-patch HEAD^' and using
the generated file with the 'hbase-git' project to be the easiest. The file
name generated should start with 0001-HBASE-xxxxx.patch.

Ted Yu
added a comment - 20/Dec/11 06:50 I suggest working on a patch for TRUNK.
Once the review is done and the patch gets integrated, you can backport it to 0.90.
This means you would run a patched 0.90 build in production for a while before the backport is finalized in 0.90.6.
HBASE-4508 is such an example.
Is the above procedure acceptable?

Mubarak Seyed
added a comment - 20/Dec/11 06:45 This JIRA request was opened by my colleague because we need this feature in the 0.90 branch; we have a requirement to use atomic update operations in a production cluster. If this fix goes only into 0.92 and TRUNK then we need to wait a while. Can't we apply this patch to the 0.90.6 branch? I can create a separate patch for 0.92 and TRUNK. Please confirm. Thanks.

Mubarak Seyed
added a comment - 20/Dec/11 00:03 Regarding unit test classes, I don't need a separate TestXXXXResource.java, as checkAndPut/checkAndDelete are related to RowResource. I just added a few test methods for RowResource — is that OK, or do I need to create separate TestCheckAndPutResource.java/TestCheckAndDeleteResource.java files? Please confirm. Thanks.

Ted Yu
added a comment - 19/Dec/11 06:13 The diff was made against 0.90.5
I got the following when trying to apply to TRUNK:
1 out of 2 hunks FAILED -- saving rejects to file src/main/java/org/apache/hadoop/hbase/rest/TableResource.java.rej
I suggest the following steps based on TRUNK:
1. continue with refactoring
2. add unit tests for the new XXResource classes (remember to add test category)
3. let Hadoop QA test the patch
Minor comment: the year isn't needed in the license.
Thanks for your effort Mubarak.

Mubarak Seyed
added a comment - 17/Dec/11 03:30 Thanks Ted. This is what I think about refactoring TableResource and CheckAndXXXXResource.
I can't extend from TableResource, as the web container ends up looking for @Path in TableResource after calling super(table) in CheckAndXXXXResource, which causes the web container to throw an incorrect-URI exception.
Even if I make some of the methods (scanTransformAttrs) static, it breaks on variable scope.
I think we need a superclass for both TableResource and CheckAndXXXXResource, which can carry most of the methods (scanTransformAttrs, getTransform, setTransform, transform). It would look like
public class BasicResource extends ResourceBase {
  // all the common methods
}
public class TableResource extends BasicResource {
}
public class CheckAndXXXXResource extends BasicResource {
}
OR we can move all the common methods to ResourceBase and add an overloaded constructor that takes a table name, like
public ResourceBase(String table) {
}
public ResourceBase() {
}
so that all the other resource classes (such as VersionResource, SchemaResource, etc.) do not need to change.
I am just hacking on the code with option #2 (moving the methods to ResourceBase); I will test it out, post it on the review board, and then wait for comments.
Thanks,
Mubarak
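Mubarak's option #2 can be sketched as below. The outer class name and CheckAndPutResource/VersionResource bodies are illustrative stand-ins (the real classes carry JAX-RS annotations and logic omitted here); the point is that adding an overloaded ResourceBase constructor leaves no-table resources untouched.

```java
// Sketch of option #2: shared state/helpers live in ResourceBase, which gains
// an overloaded constructor taking a table name. Resources that don't need a
// table (VersionResource, SchemaResource, ...) keep using the no-arg path.
public class ResourceBaseSketch {
    static class ResourceBase {
        protected final String table;
        ResourceBase() { this(null); }           // existing no-arg path, unchanged
        ResourceBase(String table) { this.table = table; }
        // common helpers (scanTransformAttrs, getTransform, setTransform, ...)
        // would live here instead of being duplicated per resource
    }

    static class TableResource extends ResourceBase {
        TableResource(String table) { super(table); }
    }

    static class CheckAndPutResource extends ResourceBase {
        CheckAndPutResource(String table) { super(table); }
    }

    static class VersionResource extends ResourceBase {
        VersionResource() { }  // untouched: implicit super() hits the no-arg constructor
    }

    public static void main(String[] args) {
        System.out.println(new TableResource("t1").table);         // t1
        System.out.println(new CheckAndPutResource("t1").table);   // t1
        System.out.println(new VersionResource().table);           // null
    }
}
```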

Ted Yu
added a comment - 16/Dec/11 17:29 - edited The patch is of decent size; can you post it on the review board so that it is easier to correlate comments with code?
+ * Copyright 2010 The Apache Software Foundation
The year isn't needed in the license.
It looks like the two new CheckAndXXTableResource classes are cloned off of TableResource.java.
It would be nice to reuse code from TableResource.java, such as:
void scanTransformAttrs() throws IOException {
Refactoring is always an integral part of a new feature.
Thanks for your effort, Mubarak.

stack
added a comment - 15/Dec/11 05:03 Option 2 sounds better. How do you pass the value to check in option 2? You can't pass it as an attribute because it could be binary, right? Maybe in the body of the put you have two values: first the one to check, then the one to set if it pans out. Ditto for delete?
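Stack's two-values-in-the-body idea might look like the following sketch. The Cell and CheckAndMutateBody names are invented stand-ins for the gateway's CellSetModel, shown only to fix the ordering convention he suggests: check cell first, mutation cell second.

```java
import java.util.List;

// Hypothetical model of a check-and-mutate request body carrying two cells:
// the first holds the expected (check) value, the second the value to apply.
public class CheckAndMutateBody {
    static class Cell {
        final String column;
        final byte[] value;
        Cell(String column, byte[] value) { this.column = column; this.value = value; }
    }

    final Cell check;  // must match the currently stored value
    final Cell apply;  // put (or delete marker) applied only if the check passes

    CheckAndMutateBody(List<Cell> cells) {
        if (cells.size() != 2) {
            throw new IllegalArgumentException("expected [check, mutation], got " + cells.size());
        }
        this.check = cells.get(0);
        this.apply = cells.get(1);
    }
}
```

Encoding both values in the body sidesteps the binary-in-URI problem for the check value, at the cost of an ordering convention the client and server must agree on.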

Mubarak Seyed
added a comment - 14/Dec/11 08:10 Stack,
Option 1:
If we send ?check=true then the client can send the value-to-check as part of the URI, like
PUT or POST
/table/<row>/<column>:<qualifier> ?(/<check:true>) ?(/<value-to-check>) ?(/<timestamp>)
(with Put object as content)
Delete on RowResource uses only the request URI, and there is no way to pass a CellSetModel (which holds the content of the Delete object for the checkAndDelete API). If we go by the HTTP DELETE method, how do we send the entity-body (content)? Ref: http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.7
Can we treat checkAndDelete as an HTTP PUT or POST operation and send the Delete object content as the entity-body?
so, the request URI would look like
checkAndPut: PUT (or) POST /table/<row>/<column>:<qualifier> ?(/<checkandput:true>) ?(/<value-to-check>) ?(/<timestamp>)
checkAndDelete: PUT (or) POST /table/<row>/<column>:<qualifier> ?(/<checkanddelete:true>) ?(/<value-to-check>) ?(/<timestamp>)
we can make use of put or post in RowResource and refactor the put()/post() code to handle both put/checkAndPut and delete/checkAndDelete.
Option 2:
If we don't want to refactor the RowResource code, we can create new resources (such as /checkandput and /checkanddelete) and then append the /table/<row>/... convention