To further complicate the issue, the log file in (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is owned by mapred:mapred and the name of the file seems to indicate some other lineage (hadoop,hadoop). I am out of my league in understanding the permission structure for hadoop hdfs and mr. Ideas?
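The two pieces of that filename come from different places, so they do not have to match the file's owner: hadoop daemons typically name their log hadoop-<HADOOP_IDENT_STRING>-<daemon>-<hostname>.log (HADOOP_IDENT_STRING defaults to $USER and may be pinned to something like 'hadoop' in hadoop-env.sh or the init scripts), while the mapred:mapred ownership simply reflects the account the JobTracker runs as. A quick way to check both, sketched on the assumption of a stock packaged install with configs under /etc/hadoop/conf:

# Which account is actually running the JobTracker (this becomes the log file owner)
ps -ef | grep -i "[j]obtracker"

# Where the identity string used in the log filename is set, if anywhere
grep -R HADOOP_IDENT_STRING /etc/hadoop/conf 2>/dev/null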

That is what I perceive as the problem. The hdfs file system was created with the user 'hdfs' owning the root ('/'), but for some reason with a M/R job the user 'mapred' needs to have write permission to the root. I don't know how to satisfy both conditions. That is one reason that I relaxed the permission to 775 so that the group would also have write permission, but that didn't seem to help.
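One thing worth ruling out before loosening '/' any further: group write (the 775 above) only helps if the NameNode actually resolves 'mapred' into the group that owns '/' (supergroup), and with the default shell-based group mapping that lookup happens on the NameNode host. A minimal check, assuming the NameNode runs on devUbuntu05:

# Is mapred a member of supergroup as far as the NameNode host is concerned?
groups mapred

# Re-check the owner, group and mode on the HDFS root
sudo -u hdfs hadoop fs -ls -d /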

> To further complicate the issue the log file in (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is owned by mapred:mapred and the name of the file seems to indicate some other lineage (hadoop,hadoop). I am out of my league in understanding the permission structure for hadoop hdfs and mr. Ideas?
>
> From: Kevin Burton [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 30, 2013 8:31 AM
> To: [EMAIL PROTECTED]
> Cc: 'Mohammad Tariq'
> Subject: RE: Permission problem
>
> That is what I perceive as the problem. The hdfs file system was created with the user 'hdfs' owning the root ('/') but for some reason with a M/R job the user 'mapred' needs to have write permission to the root. I don't know how to satisfy both conditions. That is one reason that I relaxed the permission to 775 so that the group would also have write permission but that didn't seem to help.
>
> From: Mohammad Tariq [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 30, 2013 8:20 AM
> To: Kevin Burton
> Subject: Re: Permission problem
>
> user? "ls" shows "hdfs" and the log says "mapred"..
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <[EMAIL PROTECTED]> wrote:
> I have relaxed it even further so now it is 775
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
> Found 1 items
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
> But I still get this error:
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
> From: Mohammad Tariq [mailto:[EMAIL PROTECTED]]
> Sent: Monday, April 29, 2013 5:10 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Incompartible cluserIDS
>
> make it 755.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
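Since the JobTracker runs as 'mapred', the AccessControlException above can be reproduced (or shown to be fixed) directly from the shell without restarting anything; this is only a diagnostic sketch and the test path /permtest is made up:

# Attempt a write to '/' as the mapred user, then clean up if it succeeds
sudo -u mapred hadoop fs -mkdir /permtest
sudo -u mapred hadoop fs -rmr /permtest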


<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.</description>
</property>

So that's why it's trying to write to /data/hadoop/tmp/hadoop-mapred/mapred/system. So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name} then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777 /data, or you can remove hadoop.tmp.dir from your configs and let it be set to the default value of /tmp/hadoop-${user.name}.
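Writing out the substitution shows why the JobTracker ends up at exactly the path in the error, given hadoop.tmp.dir = /data/hadoop/tmp/hadoop-${user.name} and a JobTracker running as 'mapred':

# mapred.system.dir = ${hadoop.tmp.dir}/mapred/system
#                   = /data/hadoop/tmp/hadoop-${user.name}/mapred/system
#                   = /data/hadoop/tmp/hadoop-mapred/mapred/system    (user.name = mapred)
# and that path is resolved against the default filesystem, i.e. on HDFS, not the local disk.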

I am not clear on what you are suggesting to create on HDFS or the local file system. As I understand it hadoop.tmp.dir is the local file system. I changed it so that the temporary files would be on a disk that has more capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on HDFS. I already have this created.

Found 1 items

drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred

kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp

Found 1 items

drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp

When you suggest that I 'chmod -R 777 /data', you are suggesting that I open up all the data to everyone? Isn't that a bit extreme? First, /data is the mount point for this drive and there are other uses for this drive than hadoop, so there are other folders. That is why there is /data/hadoop. As far as hadoop is concerned:

kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/

total 12

drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs

drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred

drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp

dfs would be where the data blocks for the hdfs file system would go, mapred would be the folder for M/R jobs, and tmp would be temporary storage. These are all on the local file system. Do I have to make all of this read-write for everyone in order to get it to work?
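Worth noting when reading the error: the AccessControlException names an HDFS inode ("/"), so the permissions being checked are the HDFS ones, not the local /data/hadoop directories listed above; the same path string can exist in both namespaces. A small sketch of how to look at each side:

# HDFS view of the path (what the JobTracker is being denied on)
hadoop fs -ls -d /data/hadoop/tmp

# Local filesystem view of the same path string (DataNode/TaskTracker storage shown above)
ls -ld /data/hadoop/tmp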

So that's why it's trying to write to /data/hadoop/tmp/hadoop-mapred/mapred/system (hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system).

So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name} then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777 /data, or you can remove hadoop.tmp.dir from your configs and let it be set to the default value of:

<property>

<name>hadoop.tmp.dir</name>

<value>/tmp/hadoop-${user.name}</value>

<description>A base for other temporary directories.</description>

</property>

So to fix your problem you can do the above or set mapred.system.dir to /tmp/mapred/system in your mapred-site.xml.

--
Arpit Gupta
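Either of those routes can be taken without a blanket chmod -R 777 /data. One narrower variant (an assumption sketched here, not something spelled out in the thread) is to hand the HDFS-side tmp tree to the mapred user so the JobTracker can create its mapred/system subtree itself; the other is the mapred-site.xml change, which needs no chmod at all because /tmp/mapred on HDFS is already owned by mapred (see the listing above):

# Option 1: keep the current hadoop.tmp.dir and give the HDFS tmp tree to mapred
sudo -u hdfs hadoop fs -chown -R mapred /data/hadoop/tmp

# Option 2: set mapred.system.dir to /tmp/mapred/system in mapred-site.xml and restart the JobTracker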

> I am not clear on what you are suggesting to create on HDFS or the local file system. As I understand it hadoop.tmp.dir is the local file system. I changed it so that the temporary files would be on a disk that has more capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on HDFS. I already have this created.
>
> Found 1 items
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> When you suggest that I 'chmod -R 777 /data', you are suggesting that I open up all the data to everyone? Isn't that a bit extreme? First /data is the mount point for this drive and there are other uses for this drive than hadoop so there are other folders. That is why there is /data/hadoop. As far as hadoop is concerned:
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
> total 12
> drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
> drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
>
> dfs would be where the data blocks for the hdfs file system would go, mapred would be the folder for M/R jobs, and tmp would be temporary storage. These are all on the local file system. Do I have to make all of this read-write for everyone in order to get it to work?
>
> From: Arpit Gupta [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 30, 2013 10:01 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Permission problem
>
> ah
>
> this is what mapred.system.dir defaults to
>
> <property>
> <name>mapred.system.dir</name>
> <value>${hadoop.tmp.dir}/mapred/system</value>
> <description>The directory where MapReduce stores control files.</description>
> </property>
>
> So that's why it's trying to write to /data/hadoop/tmp/hadoop-mapred/mapred/system
>
> So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name} then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777 /data or you can remove the hadoop.tmp.dir from your configs and let it be set to the default value of
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/tmp/hadoop-${user.name}</value>
> <description>A base for other temporary directories.</description>
> </property>
>
> So to fix your problem you can do the above or set mapred.system.dir to /tmp/mapred/system in your mapred-site.xml.
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <[EMAIL PROTECTED]> wrote:
>
> In core-site.xml I have:
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://devubuntu05:9000</value>
> <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation.</description>
> </property>
>
> In hdfs-site.xml I have
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/data/hadoop/tmp/hadoop-${user.name}</value>
> <description>Hadoop temporary folder</description>
> </property>
>
> From: Arpit Gupta [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 30, 2013 9:48 AM
> To: Kevin Burton
> Cc: [EMAIL PROTECTED]
> Subject: Re: Permission problem
>
> Based on the logs your system dir is set to
>
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
>
> what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <[EMAIL PROTECTED]> wrote:

> Kevin
>
> You will have to create a new account if you did not have one before.
>
> --
> Arpit
>
> On Apr 30, 2013, at 9:11 AM, Kevin Burton <[EMAIL PROTECTED]> wrote:
>
> I don't see a "create issue" button or tab. If I need to log in then I am not sure what credentials I should use to log in because all I tried failed.
>
> <image001.png>
>
> From: Arpit Gupta [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 30, 2013 11:02 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Permission problem
>
> https://issues.apache.org/jira/browse/HADOOP and select create issue.
>
> Set the affect version to the release you are testing and add some basic description.
>
> Here are the commands you should run.
>
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
>
> and
>
> sudo -u hdfs hadoop fs -chmod -R 777 /data
>
> chmod is also for the directory on hdfs.
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <[EMAIL PROTECTED]> wrote:
>
> I am not sure how to create a jira.
>
> Again I am not sure I understand your workaround. You are suggesting that I create /data/hadoop/tmp on HDFS like:
>
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
>
> I don't think I can chmod -R 777 on /data since it is a disk and as I indicated it is being used to store data other than that used by hadoop. Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred, and tmp folder. Which one of these local folders needs to be opened up? I would rather not open up all folders to the world if at all possible.
>
> From: Arpit Gupta [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 30, 2013 10:48 AM
> To: Kevin Burton
> Cc: [EMAIL PROTECTED]
> Subject: Re: Permission problem
>
> It looks like hadoop.tmp.dir is being used both for local and hdfs directories. Can you create a jira for this?
>
> What I recommended is that you create /data/hadoop/tmp on hdfs and chmod -R /data
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <[EMAIL PROTECTED]> wrote:
>
> I am not clear on what you are suggesting to create on HDFS or the local file system. As I understand it hadoop.tmp.dir is the local file system. I changed it so that the temporary files would be on a disk that has more capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on HDFS. I already have this created.
>
> Found 1 items
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> When you suggest that I 'chmod -R 777 /data', you are suggesting that I open up all the data to everyone? Isn't that a bit extreme? First /data is the mount point for this drive and there are other uses for this drive than hadoop so there are other folders. That is why there is /data/hadoop. As far as hadoop is concerned:
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
> total 12
> drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
> drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
>
> dfs would be where the data blocks for the hdfs file system would go, mapred would be the folder for M/R jobs, and tmp would be temporary storage. These are all on the local file system. Do I have to make all of this read-write for everyone in order to get it to work?
>
> From: Arpit Gupta [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 30, 2013 10:01 AM
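Whichever fix is applied, a short verification pass might look like the following; the service name is assumed from the hadoop-0.20-mapreduce package naming and may differ on a given install:

# Confirm the HDFS-side directory exists and check its owner and permissions
sudo -u hdfs hadoop fs -ls /data/hadoop/tmp

# Restart the JobTracker and watch its log for the earlier AccessControlException
sudo service hadoop-0.20-mapreduce-jobtracker restart    # service name assumed
tail -f /var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log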
