I am using one of the old legacy versions (0.20) of Hadoop for our cluster. We have scheduled an upgrade to a newer version within a couple of months, but I would like to understand a couple of things before moving forward with the upgrade plan.

We have about 200 datanodes, and some of them have larger storage than others. The storage per datanode varies between 12 TB and 72 TB.

We found that the disk-used percentage is not uniform across the datanodes. On the larger-storage nodes the percentage of disk space used is much lower than on the nodes with smaller storage. On the larger nodes the used percentage varies, but averages about 30-50%; on the smaller nodes it is as high as 99.9%. Is this expected? If so, then we are not using a lot of the disk space effectively. Is this solved in a later release?

If not, I would like to know whether there are any checks or debugging steps one can do to improve this with the current version, or whether upgrading Hadoop should solve the problem.

Maybe you need to modify the rack awareness script so that the racks are balanced, i.e., all racks have the same total capacity: one rack made of 6 small nodes, another rack made of 1 large node. P.S. You need to restart the cluster after modifying the rack awareness script.
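(For reference, a minimal sketch of such a rack awareness script, assuming the 0.20-era topology.script.file.name hook; the "g*"/"s*" hostname prefixes for small and large nodes are taken from later in this thread, and the rack names are made up. The script receives datanode hostnames/IPs as arguments and must print one rack path per argument:)

  #!/bin/sh
  # hypothetical topology script: put the small "g*" nodes and the large
  # "s*" nodes on separate, capacity-matched racks
  for host in "$@"; do
    case "$host" in
      g*) echo "/rack-small" ;;   # 12 TB class nodes
      s*) echo "/rack-large" ;;   # 72 TB class nodes
      *)  echo "/default-rack" ;;
    esac
  done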

On 2013/3/19 7:17, Bertrand Dechoux wrote:
> And by "active", do you mean that it actually stops by itself? Otherwise the throttling/limit might be an issue with regard to the data volume or velocity.
>
> What threshold is used?
>
> About the small and big datanodes, how are they distributed with regard to racks?
> About the files, what replication factor(s) and block size(s) are used?
>
> Surely trivial questions again.
>
> Bertrand

> And by "active", do you mean that it actually stops by itself? Otherwise the throttling/limit might be an issue with regard to the data volume or velocity.

This "else" is probably what's happening. I just checked the logs. Its active almost all the time. > What threshold is used?

I don't know what this is. How can I find out?

> About the small and big datanodes, how are they distributed with regard to racks?

We haven't considered rack awareness for our cluster; it is currently treated as one rack. I am going through some docs to figure out how I can implement this after the upgrade.
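(For when you get there: in 0.20, the rack script is wired in with a single property; a minimal sketch, assuming the script above lives at a hypothetical path /etc/hadoop/topology.sh:)

  <!-- core-site.xml (or hadoop-site.xml on 0.20-era deployments) -->
  <property>
    <name>topology.script.file.name</name>
    <value>/etc/hadoop/topology.sh</value>
  </property>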

> About the files, what replication factor(s) and block size(s) are used?

The replication factor is 2.

> Surely trivial questions again.

Not really :)

Thanks,
-Tapas

What do you mean that the balancer is always active? It is meant to be used as a tool, and it exits once it balances in a specific run (it loops until it does, but always exits at the end). The balancer balances based on usage percentage, so that is probably what you're looking for/missing.

On Tue, Mar 19, 2013 at 6:56 AM, Tapas Sarangi <[EMAIL PROTECTED]> wrote:
> On Mar 18, 2013, at 8:21 PM, 李洪忠 <[EMAIL PROTECTED]> wrote:
>> Maybe you need to modify the rack awareness script so that the racks are balanced ...
>
> Like I mentioned earlier in my reply to Bertrand, we haven't considered rack awareness for the cluster; currently it is treated as just one rack. Can that be the problem? I don't know…
>
> -Tapas

node A = 12TB, node B = 72TB. How many A nodes and how many B nodes do you have out of 200? If you have more B than A, you can deactivate A, clear it and apply again. I suppose that cluster is about 3-5 Tb. Run the balancer with threshold 0.2 or 0.1.
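(For concreteness, a minimal sketch of running the 0.20 balancer with an explicit threshold; the threshold is a percentage of capacity, so smaller values force nodes closer to the cluster average but make runs take longer:)

  # run on/near the namenode; the default threshold is 10 (percent)
  hadoop balancer -threshold 5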

Having different servers in one rack is a bad idea. You should rebuild the cluster with multiple racks.


> What do you mean that the balancer is always active?

Meaning, the same process is active for a long time. The process that starts may not be exiting at all. We have a cron job set to run it every 10 minutes, but that's not in effect because the process may never exit.

> It is meant to be used as a tool and it exits once it balances in a specific run (it loops until it does, but always exits at the end). The balancer balances based on usage percentage, so that is probably what you're looking for/missing.

Maybe. How does the balancer determine the usage percentage?

-Tapas
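(An aside on the cron setup: a minimal sketch of a guarded crontab entry so runs don't pile up; the file path, user, and log location are hypothetical. In 0.20 the balancer also takes a lock file in HDFS, so a second instance started while one is running exits on its own:)

  # /etc/cron.d/hdfs-balancer (hypothetical): skip if one is already running
  */10 * * * * hdfs pgrep -f balancer.Balancer >/dev/null || hadoop balancer >> /var/log/hadoop/balancer.log 2>&1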


If your balancer does not exit, then it means it is still working hard in iterations, trying to balance your cluster. The default bandwidth allows only a limited transfer speed (10 Mbps) so as not to affect the cluster's read/write performance while moving blocks between DNs for balancing, so the operation may be slow unless you raise the allowed bandwidth.
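(A minimal sketch of raising that limit; dfs.balance.bandwidthPerSec is a per-datanode setting in hdfs-site.xml, in bytes per second, read at datanode startup. The 10 MB/s value here is just an example:)

  <!-- hdfs-site.xml on the datanodes -->
  <property>
    <name>dfs.balance.bandwidthPerSec</name>
    <!-- bytes per second; the stock default is 1048576 (1 MB/s) -->
    <value>10485760</value>
  </property>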

On Wed, Mar 20, 2013 at 7:37 AM, Tapas Sarangi <[EMAIL PROTECTED]> wrote:
> Any more follow ups?
>
> Thanks
> -Tapas

> On Mar 19, 2013, at 5:00 AM, Алексей Бабутин <[EMAIL PROTECTED]> wrote:
>> node A = 12TB, node B = 72TB. How many A nodes and how many B nodes do you have out of 200?
>
> We have more A nodes than B; the ratio is about 80:20. Note that not all the B nodes are 72TB, that's a maximum value. Similarly, for A it is a minimum value.
>
>> If you have more B than A, you can deactivate A, clear it and apply again.
>
> Apply what? It may not be a choice for an active system, and it may cripple us for days.
>
>> I suppose that cluster is about 3-5 Tb. Run the balancer with threshold 0.2 or 0.1.
>
> You meant 3.5 PB; then you are about right. What does this threshold do exactly? We are not setting the threshold manually, but isn't hadoop's default 0.1?
>
>> Having different servers in one rack is a bad idea. You should rebuild the cluster with multiple racks.
>
> Why a bad idea? We are using hadoop as a file system, not as a scheduler. How are multiple racks going to help in balancing the disk usage across datanodes?

dfs.balance.bandwidthPerSec is set in hdfs-site.xml. I think the balancer can't help you, because it makes all the nodes equal; they can differ only by the balancer threshold. Threshold = 10 by default, which means nodes can differ by up to 350Tb from each other in a 3.5Pb cluster; with threshold = 1, up to 35Tb, and so on.

In the ideal case with replication factor 2, with two nodes of 12Tb and 72Tb, you will be able to keep only 12Tb of replicated data.

The best way, in my opinion, is to use multiple racks. Nodes in a rack must have identical capacity, and the racks must have identical total capacity. For example:

rack1: 1 node with 72Tb
rack2: 6 nodes with 12Tb
rack3: 3 nodes with 24Tb

It helps with balancing, because a duplicated block must go to another rack.

Why did you select hdfs? Maybe Lustre, CephFS or something else is a better choice.

On Mar 20, 2013, at 5:35 AM, Алексей Бабутин <[EMAIL PROTECTED]> wrote:

> dfs.balance.bandwidthPerSec is set in hdfs-site.xml. I think the balancer can't help you, because it makes all the nodes equal; they can differ only by the balancer threshold. Threshold = 10 by default, which means nodes can differ by up to 350Tb from each other in a 3.5Pb cluster; with threshold = 1, up to 35Tb, and so on.

If we use multiple racks, let's assume we have 10 racks and they are equally divided in size (350 TB each). With a default threshold of 10, any two nodes on a given rack will have a maximum difference of 35 TB, is that correct? Also, does this mean the difference between any two racks will also go down to 35 TB?

> In the ideal case with replication factor 2, with two nodes of 12Tb and 72Tb, you will be able to keep only 12Tb of replicated data.

Yes, this is true for exactly two nodes in the cluster with 12 TB and 72 TB, but not true for more than two nodes in the cluster.

> The best way, in my opinion, is to use multiple racks. Nodes in a rack must have identical capacity, and the racks must have identical total capacity. For example:
>
> rack1: 1 node with 72Tb
> rack2: 6 nodes with 12Tb
> rack3: 3 nodes with 24Tb
>
> It helps with balancing, because a duplicated block must go to another rack.

The same question I asked earlier in this message: does using multiple racks with the default balancer threshold minimize the difference between racks?

> Why did you select hdfs? Maybe Lustre, CephFS or something else is a better choice.

It wasn't my decision, and I probably can't change it now. I am new to this cluster and trying to understand a few issues. I will explore the other options you mentioned.

Are you running the balancer? If the balancer is running and it is slow, try increasing the balancer bandwidth.

On 24 March 2013 09:21, Tapas Sarangi <[EMAIL PROTECTED]> wrote:

> Thanks for the follow up. I don't know whether an attachment will pass through this mailing list, but I am attaching a pdf that contains the usage of all live nodes.
>
> All nodes starting with the letter "g" are the ones with smaller storage space, whereas nodes starting with the letter "s" have larger storage space. As you will see, most of the "gXX" nodes are completely full whereas the "sXX" nodes have a lot of unused space.
>
> Recently, we have been facing a crisis frequently: 'hdfs' goes into a mode where it is not able to write any further, even though the total space available in the cluster is about 500 TB. We believe this has something to do with the way it is balancing the nodes, but we don't understand the problem yet. Maybe the attached PDF will help some of you (experts) to see what is going wrong here...
>
> Thanks
> ------
>
>> The balancer knows about the topology, but when it calculates balancing it operates only on nodes, not on racks. You can see how it works in Balancer.java, in BalancerDatanode, around line 509.
>>
>> I was wrong about 350Tb/35Tb; it calculates it this way:
>>
>> For example:
>> cluster_capacity = 3.5Pb
>> cluster_dfsused = 2Pb
>>
>> avgutil = cluster_dfsused / cluster_capacity * 100 = 57.14% used cluster capacity.
>> Then we know the average node utilization (node_dfsused / node_capacity * 100). The balancer thinks all is good if avgutil + 10 > node_utilization >= avgutil - 10.
>>
>> The ideal case is that every node uses avgutil of its capacity, but for a 12TB node that is only about 6.5Tb and for a 72Tb node about 40Tb.
>>
>> The balancer can't help you.
>>
>> Show me http://namenode.rambler.ru:50070/dfsnodelist.jsp?whatNodes=LIVE if you can.
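(To make that arithmetic concrete, a quick re-derivation of the numbers above with the default threshold of 10:)

  # avgutil = cluster_dfsused / cluster_capacity * 100
  echo "scale=4; 2000*100/3500" | bc   # 57.1428 -> avgutil ~= 57.14%
  # a node is considered balanced when
  #   avgutil - 10 <= node_utilization < avgutil + 10
  # i.e. between roughly 47.14% and 67.14% used;
  # at avgutil, a 12 TB node holds ~6.9 TB and a 72 TB node ~41 TB,
  # matching the ~6.5Tb / ~40Tb figures quoted above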

-setBalancerBandwidth <bandwidth in bytes per second>

So the value is bytes per second. If it is running and exiting, it means it has completed the balancing.

On 24 March 2013 11:32, Tapas Sarangi <[EMAIL PROTECTED]> wrote:

> Yes, we are running the balancer, though a balancer process runs for almost a day or more before exiting and starting over.
> The current dfs.balance.bandwidthPerSec value is set to 2x10^9. I assume that's bytes, so about 2 gigabytes/sec. Shouldn't that be reasonable? If it is in bits then we have a problem.
> What's the unit for "dfs.balance.bandwidthPerSec"?

--
http://balajin.net/blog
http://flic.kr/balajijegan

Yes, thanks for pointing that out, but I already know that it is completing the balancing when exiting; otherwise it shouldn't exit. Your answer doesn't solve the problem I mentioned earlier in my message: 'hdfs' is stalling, and hadoop is not writing unless space is cleared up from the cluster, even though "df" shows the cluster has about 500 TB of free space.


Thanks. We have a 1-1 configuration of drives and folders in all the datanodes.

-Tapas

On Mar 24, 2013, at 3:29 PM, Jamal B <[EMAIL PROTECTED]> wrote:

> On both types of nodes, what is your dfs.data.dir set to? Does it specify multiple folders on the same sets of drives, or is it 1-1 between folder and drive? If it's set to multiple folders on the same drives, it is probably multiplying the amount of "available capacity" incorrectly, in that it assumes a 1-1 relationship between folder and the total capacity of the drive.
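(For illustration, a 1-1 layout per drive looks like this in hdfs-site.xml; the mount points are hypothetical:)

  <!-- hdfs-site.xml: one dfs.data.dir entry per physical drive -->
  <property>
    <name>dfs.data.dir</name>
    <value>/data/disk1/dfs,/data/disk2/dfs,/data/disk3/dfs</value>
  </property>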

Then I think the only way around this would be to decommission the smaller nodes one at a time, and ensure that the blocks are moved to the larger nodes. Once complete, bring the smaller nodes back in, but maybe only after you tweak the rack topology to match your disk layout more than your network layout, to compensate for the unbalanced nodes.
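(A minimal sketch of the 0.20-era decommission flow described here; the exclude-file path and hostname are hypothetical:)

  # hdfs-site.xml must already point dfs.hosts.exclude at an exclude file:
  #   <property>
  #     <name>dfs.hosts.exclude</name>
  #     <value>/etc/hadoop/excludes</value>
  #   </property>
  echo "g001.example.com" >> /etc/hadoop/excludes   # drain one small node
  hadoop dfsadmin -refreshNodes
  # wait until the node shows "Decommissioned" in the namenode web UI,
  # then remove it from the exclude file and refresh again to re-add it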


