1.
- Do you put the master node's <hostname> under fs.default.name in core-site.xml on the slave machines, or the slaves' hostnames?
- Do you need to run "sudo -u hdfs hadoop namenode -format" and create the /tmp and /var folders on HDFS for the slave machines that will be running only a DN and TT? Do you still need to create the hadoop/dfs/name folder on the slaves?

2.
In hdfs-site.xml, for the dfs.name.dir and dfs.data.dir properties we specify /hadoop/dfs/name and /hadoop/dfs/data as local Linux NFS directories, created by running "mkdir -p /hadoop/dfs/data". But the mapred.system.dir property is supposed to point to HDFS and not NFS, since we run "sudo -u hdfs hadoop fs -mkdir /tmp/mapred/system"? If so, since both use exactly the same format (/far/boo/baz), how does Hadoop know which directory is local on NFS and which is on HDFS? Would you please kindly reconfirm.

> 1.
> - Do you put the master node's <hostname> under fs.default.name in core-site.xml on the slave machines, or the slaves' hostnames?

Master. I have a 4-node cluster, named foo1 - foo4. My fs.default.name is hdfs://foo1.domain.com.

> - Do you need to run "sudo -u hdfs hadoop namenode -format" and create the /tmp and /var folders on the HDFS of the slave machines that will be running only a DN and TT? Do you still need to create the hadoop/dfs/name folder on the slaves?

(The following is the simple answer, for non-HA, non-federated HDFS. You'll want to get the simple example working before trying the complicated ones.)

No. A cluster has one namenode, running on the machine known as the master, and the admin must run "hadoop namenode -format" on that machine only.

In my example, I ran "hadoop namenode -format" on foo1.

> 2.
> In hdfs-site.xml, for the dfs.name.dir and dfs.data.dir properties we specify /hadoop/dfs/name and /hadoop/dfs/data as local Linux NFS directories, created by running "mkdir -p /hadoop/dfs/data"
> but the mapred.system.dir property is supposed to point to HDFS and not NFS, since we run "sudo -u hdfs hadoop fs -mkdir /tmp/mapred/system"??
> If so, since both use exactly the same format (/far/boo/baz), how does Hadoop know which directory is local on NFS and which is on HDFS?

This is very confusing, to be sure! There are a few places where paths are implicitly known to be on HDFS rather than being Linux filesystem paths. mapred.system.dir is one of those. This does mean that given a string that starts with "/tmp/" you can't necessarily know whether it's a Linux path or an HDFS path without looking at the larger context.

In the case of mapred.system.dir, the docs are the place to check; according to cluster_setup.html, mapred.system.dir is the "Path on the HDFS where the Map/Reduce framework stores system files".
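To make the distinction concrete, here is a sketch of the relevant properties as discussed in this thread (property names are from the Hadoop 1.x-era configs; the values are the paths used above, and comments mark which filesystem each one refers to):

```xml
<!-- hdfs-site.xml: these values are LOCAL Linux filesystem paths on each node -->
<property>
  <name>dfs.name.dir</name>
  <value>/hadoop/dfs/name</value>  <!-- namenode metadata; master only -->
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/hadoop/dfs/data</value>  <!-- block storage; each datanode -->
</property>

<!-- mapred-site.xml: this value is interpreted as an HDFS path,
     created with: sudo -u hdfs hadoop fs -mkdir /tmp/mapred/system -->
<property>
  <name>mapred.system.dir</name>
  <value>/tmp/mapred/system</value>
</property>
```

The strings look identical; only the property's documented semantics tell you which filesystem the daemon will resolve it against.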

The problem was in EC2 security. While I could passwordlessly ssh into another node and back, I could not telnet to it because of the EC2 firewall. I needed to open the ports for the NN and JT. :)

Now I can see 2 DNs when running "hadoop fsck", and I can also -ls into the NN from the slave. Sweet!!!

Is it possible to balance data over the DNs without copying it with the hadoop -put command? I read about bin/start-balancer.sh somewhere but cannot find it in my current hadoop installation. Besides, is balancing data over the DNs going to improve the performance of an MR job?

1) Have you set up passwordless ssh between both hosts for the user who owns the hadoop processes (or root)?
2) If the answer to question 1 is yes, how did you start the NN, JT, DN and TT?
3) If you started them one by one, there is no reason running a command on one node would execute it on the other.

On Fri, Oct 26, 2012 at 11:47 AM, Kartashov, Andy <[EMAIL PROTECTED]> wrote:
> I successfully ran a job on a cluster on foo1 in pseudo-distributed mode and am now trying a fully-distributed one.
>
> a. I created another instance foo2 on EC2.

It seems like you're trying to use the start-dfs.sh style startup scripts to manually run a cluster on EC2. This is doable, but it's not very easy due to the mismatch in expectations between EC2-style deployments and start-dfs.sh. Setting up a manually started cluster requires a bit of up-front work, and EC2 spin-up/spin-down cycles mean you end up redoing that work frequently.

Of course, setting up a manual cluster can be a really good way to understand how all the parts work together, and doing it on EC2 should work just fine.

> Installed hadoop on it and copied the conf/ folder from foo1 to foo2. I created the /hadoop/dfs/data folder on the local linux system on foo2.
>
> b. on foo1 I created the file conf/slaves and added:
> localhost
> <hostname-of-foo2>

I'd strongly recommend being consistent with the naming; don't mix "localhost" and DNS names. EC2 has "ec2.internal" in /etc/resolv.conf by default, so you can "ping ip-10-42-120-3" and it should work just fine. Then make conf/master list your first host by name, and make conf/slaves list all your hosts by name. Note that for small clusters, running a DN and a NN on a single host is an acceptable compromise and works OK.

You also should make sure that your user account can ssh to all the nodes:

% for h in $(cat conf/slaves); do ssh -o StrictHostKeyChecking=no $h hostname; done

 - answer "yes" to any "allow untrusted certificate" messages
 - if you get "permission denied" messages you'll need to set up the authorized_keys properly.
 - after this loop succeeds you should be able to run it again and get a clean list of hostnames.
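If you do hit "permission denied", the usual fix looks roughly like the following. This is a minimal sketch, not a complete recipe: it generates a passphrase-less key if one is missing and authorizes it locally. On EC2 you would append the same public key to ~/.ssh/authorized_keys on every slave (e.g. with ssh-copy-id, or by baking it into the AMI):

```shell
# Create a passphrase-less keypair if absent and authorize it locally.
# File names and permissions follow OpenSSH defaults.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

After pushing the public key to each slave, the hostname loop above should complete without any password prompts.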

> At this point I cannot find an answer on what to do next.
>
> I started NN, DN, SNN, JT, TT on foo1. After I ran "hadoop fsck /user/bar -files -blocks -locations", it showed the # of datanodes as 1. I was expecting the DN and TT on foo2 to be started by foo1. But it didn't happen, so I started them myself and tried the command again. Still one DN.

You don't need to start the daemons individually, and doing so is very difficult to get right. I virtually never do so -- I use the start-dfs.sh script to start the daemons (NN, DN, TT, etc). The "master" and "slaves" config files are parsed by the start-*.sh scripts, not by the daemons themselves. And the daemons don't start themselves -- for a manual cluster, the start-*.sh scripts are responsible. (In a production deployment such as CDH, there is an /etc/init.d script which is managed by the distro packaging to start and manage the daemons.)
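To get a feel for the fan-out those scripts perform, here is a hypothetical, simplified dry-run. The real slaves.sh reads conf/slaves and ssh-es to each host to invoke hadoop-daemon.sh; this sketch only echoes the commands it would run (the temp file path and the hostnames are illustrative stand-ins for conf/slaves):

```shell
# Simplified illustration of how start-dfs.sh fans out over conf/slaves.
# Dry run: print, rather than execute, the per-slave ssh commands.
printf 'foo2\nfoo3\n' > /tmp/slaves.example   # stand-in for conf/slaves
for host in $(cat /tmp/slaves.example); do
  echo "ssh $host hadoop-daemon.sh start datanode"
done
```

Swapping the echo for an actual ssh invocation is essentially what the stock scripts do, which is why passwordless ssh from the master to every slave is a prerequisite.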

You should definitely give Whirr a try for Hadoop on AWS. It solves all these issues and works smoothly.

Thanks,
Nitin


While I did not find the start-balancer.sh script on my machine, I successfully used the following command:

"$ hadoop balancer -threshold 10" and achieved the exact same result.

One issue remains: controlling the start/stop of the slaves' daemons through the master. Somehow I don't have start-dfs.sh/stop-dfs.sh nor start-all.sh on my machine either. For now, I am starting the dfs and mapreduce daemons on each slave manually and individually.

Can someone post the content of the start-all.sh script so I could use it in my environment?
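From memory, the Hadoop 1.x bin/start-all.sh is just a thin wrapper that chains the HDFS and MapReduce start scripts. Roughly, it looks like the following (a sketch, not a verbatim copy; check a stock Apache tarball for the exact contents, and note it won't help if start-dfs.sh itself is missing from your install):

```shell
#!/usr/bin/env bash
# Approximate bin/start-all.sh from a Hadoop 1.x tarball (sketch from memory).
bin=$(dirname "$0")
bin=$(cd "$bin" && pwd)
. "$bin"/hadoop-config.sh          # sets HADOOP_CONF_DIR, HADOOP_HOME, etc.
"$bin"/start-dfs.sh --config "$HADOOP_CONF_DIR"      # NN, DNs, SNN
"$bin"/start-mapred.sh --config "$HADOOP_CONF_DIR"   # JT, TTs
```

If those scripts aren't on your machine at all, they probably weren't packaged by your distro: CDH-style packages start the daemons through /etc/init.d service scripts instead of the bin/ scripts from the tarball.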


