Description of problem:
Adding an s3ql (fuse.s3ql) based brick to a volume returns no error message, but the .glusterfs directory is never created on the brick, nor is there any access to the brick via the volume.
Version-Release number of selected component (if applicable):
S3QL 1.11.1
gluster 3.3beta3
Fedora 16
How reproducible:
Always
Steps to Reproduce:
1.Install glusterfs and s3ql
2.mkfs.s3ql s3://bucketname; mount.s3ql s3://bucketname /mnt/s3ql
3.add-brick the /mnt/s3ql mount to the volume
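As a sketch, the steps above look roughly like the following. The volume name (testvol), hostname (server1), and bucket name are placeholders, not taken from the report:

```shell
# Create and mount the s3ql file system (bucket name is a placeholder).
mkfs.s3ql s3://bucketname
mkdir -p /mnt/s3ql
mount.s3ql s3://bucketname /mnt/s3ql

# Adding the mounted s3ql file system as a brick reports success...
gluster volume add-brick testvol server1:/mnt/s3ql

# ...but the brick is never initialised: no .glusterfs directory appears.
ls -a /mnt/s3ql
```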
Actual results:
The .glusterfs dir is not created on the brick (/mnt/s3ql) and no files or dirs can be created on the volume.
Expected results:
Files and dirs should be able to be written to the volume based on the s3ql brick.
Additional info:
Both gluster and s3ql work fine independently.
Please note that s3backer, another S3-based block storage file system similar to s3ql, works fine as a brick. However, the file system type s3backer reports via df -T is ext4. I believe the file system type s3ql reports (fuse.s3ql) is what makes gluster choke.
http://code.google.com/p/s3ql/
http://code.google.com/p/s3backer/
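One way to probe a candidate brick directory is to check whether its file system accepts extended attributes, since GlusterFS stores its metadata in xattrs on the brick. The helper below is a hypothetical sketch, not part of GlusterFS; it probes user.* xattrs (GlusterFS itself needs trusted.* xattrs, which require root, but a file system that rejects user.* xattrs outright is already a strong hint the brick will not work):

```python
import errno
import os
import tempfile

def supports_xattrs(path):
    """Return True if a file created under `path` accepts a user.* xattr.

    Hypothetical probe: a False result suggests the file system cannot
    hold the extended attributes a GlusterFS brick needs.
    """
    fd, probe = tempfile.mkstemp(dir=path)
    try:
        # Linux-only call (Python 3.3+); the attribute name is arbitrary.
        os.setxattr(probe, b"user.glusterfs-probe", b"1")
        return True
    except OSError as e:
        # EOPNOTSUPP means the file system rejects xattrs entirely.
        if e.errno == errno.EOPNOTSUPP:
            return False
        raise
    finally:
        os.close(fd)
        os.remove(probe)
```

Running this against /mnt/s3ql versus an ext4-backed mount (such as an s3backer brick) would help confirm or rule out the file-system-type hypothesis.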

Hi guys,
This issue was raised long ago, so I no longer have GlusterFS running on this particular server. However, I still found the log in question, which, oddly enough, contains some even older dates:
[root@banana ~]# cat /var/log/glusterfs/glusterfsd.log
2009-01-12 13:25:14 E [xlator.c:120:xlator_set_type] xlator: dlopen(/usr/lib64/glusterfs/1.3.9/xlator/features/locks.so): /usr/lib64/glusterfs/1.3.9/xlator/features/locks.so: cannot open shared object file: No such file or directory
2009-01-12 13:25:14 E [spec.y:123:section_type] parser: failed to set the node type
2009-01-12 13:25:14 E [spec.y:232:file_to_xlator_tree] parser: yyparser () exited with YYABORT
2009-01-12 13:25:14 E [glusterfs.c:189:xlator_graph_get] glusterfs: specification file parsing failed, exiting
2009-01-12 13:25:14 E [glusterfs.c:537:main] glusterfs: Unable to get xlator graph
[root@banana ~]#
You may keep this bug closed for now. If I come across the same issue again, I will return to this bug.
Thanks.

Because of the large number of bugs filed against it, the "mainline" version is ambiguous and about to be removed as a choice.
If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.