I found this in the xfs_db(8) Linux man page: "xfs_db commands may be run interactively (the default) or as arguments on the command line. Multiple -c arguments may be given. The commands are run in the sequence given, then the program exits. This is the mechanism used to implement xfs_check(8)." So appar...

This script basically calls xfs_db, whose executable doesn't seem to have an online manual page, so I think it's Iomega-specific. In that case it's strange that the copyright is from 2003, Silicon Graphics. Anyway, looking at the command line passed to xfs_db, I think it calls xfs_check. And the name...

Sorry I wasn't clearer. The idea was to find the executable using 'which' and then substitute the /full/path/to/, so in this case: cat /sbin/xfs_check. That can give a bunch of garbage if it's a binary, or something readable if it's a script. So I did: /sbin/xfs_check /dev/mapper/73e7530f_vg-...

That looks as if the called xfs_check is actually a script that performs some additional actions before calling the 'original' xfs_check. But it's unclear to me why it prints that line. You can find the executable by executing 'which xfs_check', and see whether it's a script with 'cat /full/path/to/xfs_check'. I th...
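A generic way to do that check is sketched below; the actual location of xfs_check on the NAS may differ, so the path is whatever 'which' reports:

```shell
# Locate the executable and inspect it (location may differ on your NAS)
p=$(which xfs_check)
file "$p"        # reports "shell script" for a script, "ELF ... executable" for a binary
head -n 20 "$p"  # readable text if it is a script, garbage if it is a binary
```

The 'file' step avoids dumping raw binary to the terminal before you decide whether 'cat' is useful.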

Maybe I can try to check first and repair later; how do I use the tool? man xfs_check. So basically: umount /mnt/pools/A/A0, then xfs_check /dev/mapper/73e7530f_vg-lv161e5b81. You can even do a 'dry run' of the repair, using the '-n' option of xfs_repair. In both cases I don't know what kind of usab...
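The check-then-dry-run sequence described above would look like this (a sketch; run as root, and only with the volume unmounted; the device path is taken from the post):

```shell
# XFS tools must not be run on a mounted filesystem
umount /mnt/pools/A/A0

# Read-only consistency check
xfs_check /dev/mapper/73e7530f_vg-lv161e5b81

# Dry run: -n reports what xfs_repair would change without writing anything
xfs_repair -n /dev/mapper/73e7530f_vg-lv161e5b81
```

Neither xfs_check nor 'xfs_repair -n' modifies the disk, so this is safe to run before deciding on an actual repair.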

I have two disks of 2 TB each that I can put in the two slots that are now empty on the NAS. That's not enough, I'm afraid. Looking at your post, the volume mounted on /mnt/pools/A/A0 is /dev/dm-0, which is 5796528128 kB, almost 6 TB. Using two 2 TB disks, a raid0 array of 4 TB can be made. The good news i...
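For reference, the size arithmetic behind "almost 6 TB" (the /proc/partitions figure is in 1 KiB blocks):

```shell
# Convert 5796528128 KiB to decimal TB and binary TiB
awk 'BEGIN {
    kib   = 5796528128
    bytes = kib * 1024
    printf "%.1f TB (decimal)\n", bytes / 1e12     # -> 5.9 TB
    printf "%.1f TiB (binary)\n", bytes / 1024^4   # -> 5.4 TiB
}'
```

Either way the volume is well over the 4 TB a raid0 of two 2 TB disks can hold.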

I see no hardware problems, so it's likely 'only' a damaged filesystem. Can you post the output of: cat /proc/mounts, cat /proc/partitions, and cat /proc/mdstat? And a second question: how valuable is your data? An attempt to repair the filesystem can cause more damage, so in the case of valuable data you shouldn'...
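The three requested diagnostics can be gathered in one go; these are standard Linux proc files and nothing here writes to disk:

```shell
# Collect the diagnostics into one file to paste on the forum
for f in /proc/mounts /proc/partitions /proc/mdstat; do
    echo "=== $f ==="
    cat "$f" 2>/dev/null || echo "(not available)"
done > /tmp/nas_diag.txt
```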

Not directly. Entware needs an even newer kernel than Entware-ng did, and for Entware-ng I had to rebuild libc to include some stubs. I did the same for Entware, but it also needed a kernel function (a usermode implementation of cmpxchg64), which I added using a kernel module. At first glance that ...

Maybe the issue I discovered could be related to a blackout which produced an unattended power-off of the NAS, and consequently the PID file was not correctly deleted? Absolutely. If you have this kind of problem regularly, you can create an (executable) script /opt/etc/init.d/S00Cleanup: if [ ! -f...
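The quoted script is cut off above; a sketch of what such an S00Cleanup script could look like follows. The PID file path is a placeholder assumption, not taken from the post:

```shell
#!/bin/sh
# /opt/etc/init.d/S00Cleanup (sketch): remove a PID file left behind by an
# unclean shutdown. PIDFILE is a hypothetical example path; adjust to the
# daemon that actually complains after a power loss.
PIDFILE=/opt/var/run/mydaemon.pid

if [ -f "$PIDFILE" ] && ! kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    # The PID file exists but no such process is running, so it is stale
    rm -f "$PIDFILE"
fi
```

The S00 prefix makes it run before the other init scripts in /opt/etc/init.d, so the stale file is gone before the daemon starts.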

OK, I'll send you a download link. Put the script in /i-data/sysvol/.PKG/Tweaks/gui/Tweaks/plugins/, reboot the NAS (or restart Tweaks from the web interface), and create a new stick. (You can re-use your current one if you delete the file /tmp/.Tweaks/pkgs_on_stick.flag. That file is the flag for Tweak...