/dev/sdb1 is a partition that was created on the LUN /dev/sdb using fdisk.

Any idea what might be wrong? Any answers are highly appreciated. Thanks!

Edited by: CoBy on 03.01.2013 15:36

Here is part of the logfile of the root.sh tool. In the meantime I tried deinstalling the clusterware and cleaning up, then verified the server configuration with cluvfy; everything was OK. I also tried another disk group name. Same problem.

oracleasm is released for this version: I installed it with yum from the software repository for this version, so it must be supported. It has also been working fine so far. I think the problem is somewhere else, but I still cannot prove it.
SELinux is disabled anyway.

First of all, you don't need to deinstall everything. Since 11.2.0.2 the root.sh script can be resumed, so simply fix the error and rerun root.sh.
Even if that fails, you can still use $GI_HOME/crs/install/rootcrs.pl -deconfig -force -lastnode to wipe everything. So there is no need for a full deinstall.
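To summarize the flow (the Grid home path below is an assumption for a typical 11.2 install; adjust it to your environment):

```shell
# Assumed Grid Infrastructure home -- adjust to your own install
GI_HOME=/u01/app/11.2.0/grid

# 1) Fix the reported error, then rerun root.sh -- since 11.2.0.2 it
#    resumes from the step that failed:
#      $GI_HOME/root.sh
#
# 2) Only if the rerun still fails, deconfigure and start clean:
#      $GI_HOME/crs/install/rootcrs.pl -deconfig -force
#    (add -lastnode on the final cluster node to also remove OCR/voting data)
echo "Grid home: $GI_HOME"
```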

Regarding the error you have -

What does the ASM log say? Maybe the disk group could not be created correctly (device permissions?), or does it already exist?
What more information do you see in the logfile of root.sh (or in the clusterware logfiles, if they already exist)?
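A quick way to check the permission angle (the device path and the grid:asmadmin ownership below are assumptions for a typical install; substitute your own):

```shell
# Show owner, group and mode of the candidate ASM device.
# /dev/sdb1 is an assumption -- use whatever device the disk group is built on.
DEV=/dev/sdb1
if [ -e "$DEV" ]; then
  stat -c '%U:%G %a %n' "$DEV"
  # The grid software owner needs read/write, e.g. grid:asmadmin 660
else
  echo "$DEV not present on this host"
fi
```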

I went for an installation without ASMLib and configured the drives using udev instead. For some reason ASMLib could not handle the permissions on the drives correctly.
After mapping them with udev and a few reboots to test availability, I could run the Grid Infrastructure installation without problems.
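For anyone going the same route, a udev rule along these lines is a reasonable starting point (the WWID is a placeholder, and grid/asmadmin are assumed owner/group; substitute the values for your system):

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules
# <wwid-of-your-lun> is a placeholder -- query it with something like
#   scsi_id -g -u -d /dev/sdb
# (the scsi_id syntax differs between EL5 and EL6)
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="<wwid-of-your-lun>", SYMLINK+="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
```

After editing, reload the rules (udevadm control --reload-rules, then retrigger or reboot) and check that the symlink shows up with the expected owner and mode before pointing the installer at it.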