Details

Description

The dfs#concat() API doesn't resolve the /.reserved/raw path. For example, if the input paths are of the form /.reserved/raw/ezone/a, then this API doesn't work properly. The idea of this jira is to discuss this behavior and handle it accordingly.
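
A minimal reproduction sketch; the /ezone encryption zone and the file names below are illustrative, not taken from an actual test:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ReservedRawConcat {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumes fs.defaultFS points at an HDFS cluster that has an
    // encryption zone /ezone containing the files trg and a.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(conf);
    Path trg = new Path("/.reserved/raw/ezone/trg");
    Path[] srcs = { new Path("/.reserved/raw/ezone/a") };
    // Fails today: instead of performing the concat, the NameNode
    // throws FileNotFoundException (see the stack trace in the
    // comments below).
    dfs.concat(trg, srcs);
  }
}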

Uma Maheswara Rao G
added a comment - 02/Dec/15 10:09 Hi Rakesh, thanks for reporting. I am not sure users will have a real need to concatenate .reserved paths.
The concat javadoc in FSNamesystem says " * This does not support ".inodes" relative path".
Yi Liu, do you know whether this comment means we don't support .reserved paths?
If so, what error are you getting when you try with .reserved paths, Rakesh R?

Rakesh R
added a comment - 02/Dec/15 10:37 Thanks Uma Maheswara Rao G for the comments.
If so, what error are you getting when you try with .reserved paths
It is throwing FileNotFoundException, but I think it is not conveying a proper message to the caller about the "This does not support ".inodes" relative path" restriction.
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /ezone/trg
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:73)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:63)
at org.apache.hadoop.hdfs.server.namenode.FSDirConcatOp.verifyTargetFile(FSDirConcatOp.java:112)
at org.apache.hadoop.hdfs.server.namenode.FSDirConcatOp.concat(FSDirConcatOp.java:83)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concat(FSNamesystem.java:1831)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.concat(NameNodeRpcServer.java:951)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.concat(ClientNamenodeProtocolServerSideTranslatorPB.java:572)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:22107)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2305)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2300)
at org.apache.hadoop.ipc.Client.call(Client.java:1448)
at org.apache.hadoop.ipc.Client.call(Client.java:1385)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy20.concat(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.concat(ClientNamenodeProtocolTranslatorPB.java:506)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:255)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy25.concat(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.concat(DFSClient.java:1468)
at org.apache.hadoop.hdfs.DistributedFileSystem.concat(DistributedFileSystem.java:584)

Rakesh R
added a comment - 04/Dec/15 11:16 Thanks Uma Maheswara Rao G for the suggestions. If this is designed intentionally, then IMHO it would be good to throw InvalidPathException to the callers instead of FileNotFoundException. How about something like:
if (FSDirectory.isReservedRawName(target)
    || FSDirectory.isReservedInodesName(target)) {
  throw new InvalidPathException(target);
}
// Also, will validate srcs...
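
For example, a rough sketch covering both the target and the srcs; the validatePath helper and its exact placement in FSDirConcatOp are hypothetical:

// Hypothetical helper: reject reserved paths up front, since
// concat does not resolve them.
private static void validatePath(String path)
    throws InvalidPathException {
  if (FSDirectory.isReservedRawName(path)
      || FSDirectory.isReservedInodesName(path)) {
    throw new InvalidPathException(path);
  }
}

// At the start of concat():
validatePath(target);
for (String src : srcs) {
  validatePath(src);
}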

Uma Maheswara Rao G
added a comment - 09/Dec/15 07:16
InvalidPathException
Hi Rakesh, I think the path is still generally a valid path; it is just that this API does not support it. So, should we instead throw an IOException with a message saying that we don't support this path?
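
Something like the following rough sketch, where the exact message wording is illustrative:

if (FSDirectory.isReservedRawName(target)
    || FSDirectory.isReservedInodesName(target)) {
  // The path is syntactically valid; concat just does not support
  // reserved paths, so tell the caller that explicitly.
  throw new IOException(
      "concat: reserved path is not supported: " + target);
}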