hadoop-hdfs-issues mailing list archives

[jira] [Commented] (HDFS-3148) The client should be able to use multiple local interfaces for data transfer

Date

Mon, 02 Apr 2012 23:37:23 GMT

[ https://issues.apache.org/jira/browse/HDFS-3148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13244834#comment-13244834 ]
Hudson commented on HDFS-3148:
------------------------------
Integrated in Hadoop-Common-trunk-Commit #1975 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1975/])
HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308617)
HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308614)
Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308617
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308614
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/FileAppendTest4.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend4.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationDelete.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPipelines.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRenameWhileOpen.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java
> The client should be able to use multiple local interfaces for data transfer
> ----------------------------------------------------------------------------
>
> Key: HDFS-3148
> URL: https://issues.apache.org/jira/browse/HDFS-3148
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs client
> Reporter: Eli Collins
> Assignee: Eli Collins
> Fix For: 1.1.0, 2.0.0
>
> Attachments: hdfs-3148-b1.txt, hdfs-3148-b1.txt, hdfs-3148.txt, hdfs-3148.txt, hdfs-3148.txt
>
>
> HDFS-3147 covers using multiple interfaces on the server (Datanode) side. Clients should
> also be able to utilize multiple *local* interfaces for outbound connections, instead of always
> using the interface associated with the local hostname. This can be accomplished with a new
> configuration parameter ({{dfs.client.local.interfaces}}) that accepts a list of interfaces the
> client should use. Acceptable values are the same as for the {{dfs.datanode.available.interfaces}}
> parameter. The client binds its socket to a specific interface, which enables outbound traffic
> to use that interface. Note that binding the client socket to a specific address is not by itself
> sufficient to ensure egress traffic uses that interface; e.g. if multiple interfaces are on the
> same subnet, the host requires IP rules that use the source address (which bind sets) to select
> the outbound interface. The SO_BINDTODEVICE socket option could be used to select a specific
> interface for the connection instead; however, it requires JNI (it is not exposed in Java's
> SocketOptions) and root access, which we don't want to require of clients.
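The bind-before-connect step described above can be sketched in plain Java. This is an illustrative standalone example, not the actual DFSClient code; the loopback address and the throwaway local ServerSocket stand in for a real local interface address and a remote Datanode:

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BindExample {
    public static void main(String[] args) throws Exception {
        // A local server to connect to, purely for demonstration.
        try (ServerSocket server = new ServerSocket(0)) {
            Socket s = new Socket();
            // Bind the outbound socket to a specific local address
            // (here loopback) before connecting. The kernel then uses
            // this address as the source address for egress traffic,
            // which is what lets source-based IP rules pick the interface.
            s.bind(new InetSocketAddress("127.0.0.1", 0));
            s.connect(new InetSocketAddress("127.0.0.1", server.getLocalPort()));
            System.out.println(s.getLocalAddress().getHostAddress());
            s.close();
        }
    }
}
```

On a multi-homed host you would bind to the address of the chosen interface rather than loopback; as the description notes, the bind alone sets the source address but does not force the route.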
> Like HDFS-3147, the client can use multiple local interfaces for data transfer. Since
> clients already cache their connections to DNs, choosing a local interface at random seems
> like a good policy. Users can also pin a specific client to a specific interface by specifying
> just that interface in dfs.client.local.interfaces.
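As a hypothetical illustration of the pinning described above, the property would go in the client's hdfs-site.xml. The property name is from this issue; the interface names below are placeholders for whatever interfaces exist on the host:

```xml
<property>
  <name>dfs.client.local.interfaces</name>
  <!-- Interfaces the client may bind to for data transfer.
       "eth1,eth2" is a placeholder list; name a single interface
       here to pin the client to that interface. -->
  <value>eth1,eth2</value>
</property>
```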
> This change was discussed in HADOOP-6210 a while back, and is useful independently
> of the other HDFS-3140 changes.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira