I commented out the default DEFAULT entry and added the one you suggested in your post. I then deleted the folder and recreated it. After restarting AFP I can now share it and edit the files, and my wife can do the same from her iMac.

Another issue that popped up, and was also resolved by this, was a directory of photographs, all with identical permissions, owners, and groups. I could see them all, but my wife could only open about half of them.

So all is well in the household again, and I thank the responders for helping me move on.

Hi all, I've tested Nexenta, OpenIndiana, and now Solaris Express. I have a question about OpenIndiana and Solaris that I'm unable to resolve: is there any way to install them onto a mirrored pool? I've tried to upgrade from a single-disk to a mirrored root pool without success.
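For what it's worth, on Solaris/OpenIndiana a single-disk root pool can usually be converted to a mirror after installation with zpool attach, rather than by reinstalling. A hedged sketch (the device names c3t0d0s0/c3t1d0s0 are placeholders; the second disk needs an SMI label with a slice 0 at least as large as the original before attaching):

```shell
# Attach a second disk to the existing root pool, turning it into a 2-way mirror.
zpool attach -f rpool c3t0d0s0 c3t1d0s0

# Watch the resilver; the mirror is not redundant until it completes.
zpool status rpool

# Make the new disk bootable as well (x86; SPARC uses installboot instead).
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0
```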

I have been running napp-it on OpenSolaris b134 for a while now on my HP Proliant Microserver and it works really great. Thank you very much for this. I do have one ZFS related question that I hope you could answer.

Currently, whenever I send data to the server, I get maximum speeds of about 35MB/s (60-70% CPU usage) for about 10-15 seconds. The disk will then start writing and network transfer drops to zero and CPU usage goes to 100%. After 2 - 3 seconds the writes complete and the server starts "accepting" data again. I am copying iTunes TV shows, so it could be that the writing starts whenever the file has been copied over the network. What is really odd is that the RAM used stays at about 400MB (5%) and does not change at all during this process. I thought that ZFS would use all of the RAM I could throw at it. Is there something in my settings that I need to change? Will using more RAM increase the speeds of my network transfers?

Thanks in advance.

Edit: Could the slow CPU also be a bottleneck? I thought this would not matter too much with a fileserver. Also, everything is wired with gigabit LAN and CAT 5e cables.

about sequential transfers
35 MB/s is an expected value for your config.

about RAM
With Solaris, you do not need to tune RAM settings; it will use all available RAM for caching. I can't say anything about the value you are seeing.

about CPU
You will not get the ZFS high-end features for nothing. It really needs RAM and CPU power to become fast.
Features like deduplication need an additional 2.5 GB of RAM per TB of data.

If you need better numbers:
switch off compression, use mirrors instead of RAID-Z, or add an SSD read cache.
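As a rough sketch of those knobs (the pool/dataset/device names tank, tank/data, and c4t1d0 are placeholders):

```shell
# Turn off compression on a dataset to cut the CPU cost per write.
zfs set compression=off tank/data

# Add an SSD to the pool as an L2ARC read cache.
zpool add tank cache c4t1d0

# Verify the cache vdev shows up.
zpool status tank
```

Switching from RAID-Z to mirrors, by contrast, is a pool-layout decision: the pool has to be recreated, e.g. zpool create tank mirror disk1 disk2 mirror disk3 disk4.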

I was copying data from an external USB drive through the MacBook to the server. It seems USB was the bottleneck: copying directly to/from the MacBook yields speeds of about 70 MB/s. The only funny thing is that the RAM usage still stays at 5%. If I open more programs on the server that changes, but file-copying activity seems to have zero effect.

Edit: My problem isn't really related to the speed of the transfers - it is more than 10x faster than my old Linksys NAS. I just expected the RAM to be used a lot more. Currently it looks like I did not need 8 GB and would have been fine sticking with 1 GB for my usage. Slightly annoying, but no big deal, as the upgrade was fairly cheap. I'm also curious to see if other users are experiencing the same issue.

I'm running OpenIndiana b148 and napp-it v0.415h, and I'm having some difficulty with clients seeing anything on NFS shares. This same pool was working fine with NexentaCore 3.04, but I wanted to give OpenIndiana a shot.

I have numerous folders shared out via NFS. None of my Ubuntu clients can see anything in the NFS shares. The only share they are able to see anything in is the VMs folder which is being accessed by ESXi.

I'm not sure what the difference between them can be. I have set the same permissions on all the folders, but still no luck!

The clients are able to create a folder but as soon as they refresh the directory it has disappeared.

Any ideas on how to troubleshoot this would be much appreciated!

I have opened the directory in terminal and tried to ls the directory:

luke@lukesmint ~/Backups $ ls

ls: cannot access WHS Programs: Operation not permitted

ls: cannot access Acer Aspire One Data: Operation not permitted

I have all the folders set to 775, so I'm not entirely sure what is going on!

I'd like to second the request for something in the napp-it GUI to automate mirroring the root pool in Solaris. I get the idea of a hardware-based enclosure, but we can't always incorporate that into a chassis, and it places a piece of unmanaged hardware between us and our system pool.

Also, I see one thing that may be a bug: when joining an Active Directory domain, the process changes the nameserver from an IP to a DNS name. Considering that without the nameserver IP you can't look up the nameserver's DNS name, this appears to break the process. Changing the nameserver back to an IP and running the final command "smbadm join -u aduser domain.local" usually results in success.
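Concretely, the workaround looks something like this (the IP 192.168.1.10 is a placeholder; aduser and domain.local are the examples from above):

```shell
# Point name resolution back at the AD DNS server by IP, not by name.
echo "nameserver 192.168.1.10" >> /etc/resolv.conf

# The CIFS service must be online before the join (-r also starts dependencies).
svcadm enable -r smb/server

# Re-run the join.
smbadm join -u aduser domain.local
```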

I like the auto-jobs for snapshots, but it would be sweet if there could be automated profiles to one-click a snapshot schedule, kind of like what Time Slider does (but maybe a bit more transparent). It would also be cool if you could integrate replication to an external host into this schedule.

All in all, wonderful work, dude. A bit more feature-completeness and I can stop scratching my head trying to decide between this and paying Nexenta.

This seems to be a permission problem.

try:
- set the folder permission to 777, and
- set the folder's default ACL to modify or full for @everyone
- share the same folder via CIFS and check the ACLs
(the needed permission is modify for everyone on files and folders)

If you use NFSv3, Unix permissions are what matters;
if you use NFSv4, you must look at the ACLs as well.
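A hedged sketch of those checks from the Solaris side (the share path /tank/nfs is a placeholder; Solaris' native /usr/bin/chmod understands ACLs directly):

```shell
# Open up the Unix permissions (what NFSv3 clients check).
chmod 777 /tank/nfs

# Replace the ACL with "modify for everyone", inherited by new files and dirs
# (f = file_inherit, d = dir_inherit) - what NFSv4/CIFS clients check.
/usr/bin/chmod A=everyone@:modify_set:fd:allow /tank/nfs

# List the resulting ACL entries.
/usr/bin/ls -V /tank/nfs
```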

I also use IPs to join a domain.
It works without problems, though sometimes I had to try twice.
I will look at it next time.

about replication between/within hosts
It is on the way - there is already a menu item, but it is not ready to use.

about NexentaStor EE
napp-it is a university project without support, based on my own needs, while NexentaStor EE is
built for enterprise use. napp-it and the free ZFS OSes based on OpenSolaris will not survive without Nexenta
and its efforts in Illumos, so keep using NexentaStor EE if you use it for business.
They have a lot of features that I do not need myself, and therefore you will not see them in napp-it
- beside the problem that only a few people contribute code to napp-it, and my time is limited.

The Oracle guide uses DNS names (here), but I tried to use an IP there and it still failed, so maybe it is also not starting the SMB services?

NexentaStor has its perks, but I'm not a fan of the Debian userland, the pricing tiers (by usable storage?), or the fact that many of the settings are more complex to set up than they would be at the command line. They also deviate from normal terminology for quite a few things (yeah, I know that for many things the terminology is interchangeable/unclear), and they change the way certain things are accessed, such as separating COMSTAR folders from the standard folders interface.

Oh, and if you use Delorean, it auto-creates folders for all of its jobs, which are shared and visible via SMB (tested with Windows).

That said, I look forward to seeing how they progress once they switch to an Illumos base.

yes, the SMB service must be running prior to the join

Yes indeed - I bought several licenses of NexentaStor, and some of these points were the reason I started napp-it two years ago.

P.S. All my numbers come from the System Monitor in OpenSolaris.

Yes, I see the same numbers on my system - it sits at 5.5% regardless of what data is being copied.
8 GB of RAM, Solaris Express 11.

My problems getting smartmontools, the compilers, and whatnot downloaded? Turns out my gateway does all kinds of web filtering, including blocking certain types of content it thinks risky - that is what was throwing the 403 error. I disabled the filter, re-did the install, and it went off just fine. Sorry...

Thanks for the great web UI. It's a lot simpler than when I started a few years back. I think it covers most of my needs, except for one minor thing.

What's the best way to back up a ZFS file system from one pool to another? I usually run a RAID-Z and have a separate external drive to which I back up important data daily from the RAID-Z. If something happens to the RAID-Z, I still have my data on the external drive.

I have some scripts to do zfs send and zfs receive, but is it possible to set this up in the web interface? I tried to use the replicate function but wasn't able to get it working. I set the job up and click 'Execute', but when I check the backup pool, nothing is there.
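For anyone doing this by hand in the meantime, the underlying commands are roughly as follows (the dataset names tank/important and backup/important are placeholders):

```shell
# One-time: create a snapshot and send a full stream to the backup pool.
zfs snapshot tank/important@snap1
zfs send tank/important@snap1 | zfs receive backup/important

# Daily: snapshot again and send only the delta since the previous snapshot.
zfs snapshot tank/important@snap2
zfs send -i tank/important@snap1 tank/important@snap2 | zfs receive backup/important
```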

I love napp-it, thanks a ton for developing it. It makes administering my system a ton easier.

I was wondering if you have any plans to include analytics of some kind? Having simple dropdown menus to select things I'm interested in and getting pretty pictures is a hell of a lot better than trying to navigate the bizarre dtrace invocations.
Duplicating the functionality of the Nexenta or Sun Storage Analytics packages would basically eliminate any interest I would have in moving away from the free/open-source stuff, and I would personally be willing to donate ~100 euros if this could be included in a future release.

What happens if the host drive fails and there is no backup of the info? How would I get my ZFS pool back? Is it as simple as reinstalling on a new drive and then importing the array back in?

BTW,
Thanks for napp-it; it makes installing Solaris much easier. I'm looking to use OpenIndiana for my OS because Time Slider looks really neat. I have purchased nearly all the parts you have recommended, so it should be smooth sailing.

You can plug your ZFS pool into any other computer with any disk controller and import the pool without problems, as long as the new computer supports the pool version used. You do not need any config files - just import. It is suggested that you export the pool first, but you can even import a destroyed pool if you have all the disks. You cannot, however, import a newer pool.

for example:
If you create a pool in Solaris Express with pool version 31, you cannot import it in OpenIndiana (max. pool version 28) or Nexenta (max. pool version 26).

If your pool has a missing log (write-cache) device, importing is also not possible.
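The move itself is just a couple of commands (the pool name tank is a placeholder):

```shell
# On the old system, if it is still alive: cleanly export the pool.
zpool export tank

# On the new system: scan all attached disks and list importable pools.
zpool import

# Import by name; add -f to force it if the pool was never exported.
zpool import tank
zpool import -f tank
```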

One question: under the user menu, by default, the only user that appears is my Unix user. I can add a second one without problems. Then I go to Windows to configure the security on one shared folder. The second added user appears, but my "unix user" doesn't. Is that normal? Is there any way to use that user?

a "bug": under Smartinfo -> howto, you have to comment out DEVICESCAN (at least on my system), not uncomment it

CIFS/SMB passwords have a different format than Unix passwords.
If CIFS/SMB is configured (this was done by the napp-it installer) and you re-enter the password at the console with

passwd username

an additional CIFS/SMB password is created, and you can then use this user for CIFS/SMB.

Here is a semi-related question: if you have dedup active on your ZFS pool, and that pool is exported as an iSCSI target holding an NTFS filesystem, how well would dedup work? Would you get the same "quality" of dedup?

Another question: I am considering exporting ZFS as an iSCSI target to a Windows server, mainly for easy management of security (both at the share and file level); our organization often requires fast and granular control of NTFS and share permissions.

I am trying to combine this with ZFS dedup/compression, mainly for storage size optimization.

OpenSolaris (which is now OpenIndiana) has the highest ZFS version among the non-Oracle Solaris ZFS distributions (v28).

There is also Solaris Express 11, which is what everyone is trying to keep up with (as Oracle is the actual ZFS developer); they are up to ZFS v31, I believe. That one is free for private, non-commercial use.

1.
Dedup works at the block level; it does not matter whether you use it via iSCSI or CIFS/NFS.
You always get the same benefit (but do not forget the RAM problem: if you want to use dedup,
you should have at least an additional 2-4 GB of RAM per TB of data, plus an SSD read cache).

2. The CIFS server behaves nearly the same as a real Windows server, with ACLs at the share and file level.
I replaced all of my Windows servers with Nexenta/OI (Windows AD, about 600 users).
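A back-of-the-envelope sketch of where that RAM figure comes from, assuming roughly 320 bytes of dedup-table entry per unique block and the default 128 KB recordsize (both numbers are rough assumptions; lots of small files mean smaller blocks and a much larger table):

```shell
DATA_TB=1
RECORDSIZE=131072   # 128 KB, the ZFS default
DDT_ENTRY=320       # assumed bytes of RAM per unique block

# Number of blocks in DATA_TB terabytes at that recordsize.
BLOCKS=$(( DATA_TB * 1024 * 1024 * 1024 * 1024 / RECORDSIZE ))

# Dedup-table size in MB if every block is unique.
DDT_MB=$(( BLOCKS * DDT_ENTRY / 1024 / 1024 ))

echo "~${DDT_MB} MB RAM per ${DATA_TB} TB for the dedup table"
```

That works out to about 2.5 GB per TB, consistent with the 2-4 GB/TB guidance above once smaller block sizes are factored in.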

Agreed that you can do the same things, but there is a huge difference between users/admins right-clicking and adding groups and users with a familiar interface, versus editing Samba text files and setting permissions by hand.

Obviously, you can control file permissions from Windows anyway, but share permissions are tricky, as they can only be administered from the sharing end.

Do not use Samba; use the integrated CIFS server and manage all ACL settings from Windows
(on a file or folder basis, or at the share level with Windows Computer Management - connect to the Solaris box).

There is nearly no difference from a user's point of view. You can handle it from your Windows machine (nearly) as you would with a 2003/2008 server.

ps
main advantages of using Solaris instead of Windows via iSCSI:
better performance than with iSCSI
you can use ZFS snaps with Windows "previous versions"
as root, you have access independent of Windows ACL settings
(I have had a lot of problems with backups and with users removing admins from ACLs)
simpler than a two-server config

problems:
some Unix/Solaris specifics one should know about
you can only use one AD server; if this server is down, you have to rejoin to the backup server
you cannot share nested folders

How can OpenIndiana have the highest version of ZFS if Solaris Express 11 is the original developer?

And why does no other OS have ZFS? Is it so hard to put it in Linux, or even Windows?

I asked on the FreeBSD IRC channel and they said it was because it was copyrighted to FreeBSD, which makes no sense if Sun owns it? And these OSes are doing it better.

And are OpenIndiana and Solaris Express 11 compatible with some Linux applications the way FreeBSD is?

And since I'm a GUI man, I like GUIs. I'm not totally lost when it comes to the terminal, but I'm not a genius either. So as a GUI man, which operating system does it best?

Until now, there has been no serious ZFS development outside Sun/Oracle!
FreeBSD is far behind (but v28 is on the way).
(...)
I hope, this may change a little with Illumos
(Base of next Nexenta/ OpenIndiana release)
Gea

Looks like it is starting to - COMSTAR just got iSCSI UNMAP support (when a block is freed on a SCSI target-mode device, a notification bubbles back up, so the zpool can reclaim the free space, a TRIM command can be passed, etc.).

Linux has a ZFS-in-the-kernel project and a ZFS-via-FUSE project. FreeBSD of course has the most mature non-Solaris-based implementation. The issue is time/people/resources. Who is going to work on, keep up with, and maintain the ports? Are they going to do it for free? Get paid? Who is going to pay them? If they do it for free, is there enough of a critical mass for other people to pick it up if they drop out/lose interest?

I am playing around with a 32 x 2 TB drive chassis (actually, the terrible 3ware 9750 controller can only export 32 units, and I am using single disks). But napp-it/OpenIndiana is only seeing 16 drives. Is there a fix - something I need to change to get it to detect all the drives?

I have no freaking clue how to compile programs for different operating systems :/ I don't even know what it means.

This server I'm building is a NAS, so my main focus is on ZFS compatibility and hardware compatibility. If any of these operating systems is better, I would choose it, as long as it can run VMware or some form of VM software, and it must be able to work as a web server.

And since FreeBSD really is not a GUI system, I don't like that part.

And in the worst case, I can just switch from this to FreeBSD without damaging any of my data? After all, it's the same filesystem, right?