About LeeBear

I'm sure someone will correct me, but if you don't want DSM to see the USB boot drive after it loads, don't you just modify the .cfg file in your boot IMG and put in the VID and PID of your USB device instead of the defaults? The default for nanoboot is something like this:
kernel zImage ihd_num=0 netif_num=1 syno_hw_version=DS3612xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168
You can plug your USB drive into a Windows machine and use Device Manager (view the device's details) to get the VID and PID of the USB drive.
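If you want to script the edit instead of doing it by hand, something like this works (untested sketch; "syslinux.cfg" and the 0x0781/0x5567 values are just examples, substitute the file name in your IMG and the VID/PID Device Manager reports for your stick):

```shell
# Sketch of the cfg edit; file name and new IDs are illustrative placeholders.
CFG="syslinux.cfg"
# Recreate the default nanoboot kernel line from above, for illustration:
echo 'kernel zImage ihd_num=0 netif_num=1 syno_hw_version=DS3612xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168' > "$CFG"
# Swap in your USB stick's IDs:
sed -i 's/vid=0x[0-9A-Fa-f]*/vid=0x0781/; s/pid=0x[0-9A-Fa-f]*/pid=0x5567/' "$CFG"
cat "$CFG"
```

Same idea if you edit it manually: only the vid= and pid= tokens change, the rest of the kernel line stays as-is.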

By default, Plex transcoding tries to transcode 120 seconds of the video as fast as it can, which is why every CPU maxes out regardless of speed. It then throttles down and up as it tries to maintain that buffer.
While transcoding is CPU intensive, it's also I/O intensive, as there is a combination of reads and writes. If you don't want your NAS services to be impacted during a transcode, I suggest you put your apps/Plex temp folders on separate spindles from your media. Personally I use an SSD for my apps/temp and keep my media/files on a separate volume.
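One simple way to split the temp folder off, assuming the app just follows a symlink (paths below are made-up stand-ins for real volume paths, not actual DSM locations):

```shell
# Minimal sketch: point the transcode temp folder at a folder on another
# (SSD) volume so transcode I/O doesn't compete with media reads.
SSD_TMP="ssd_volume/plex_tmp"        # stand-in for an SSD volume path
PLEX_TMP="media_volume/plex/tmp"     # stand-in for the transcoder temp folder
mkdir -p "$SSD_TMP" "$(dirname "$PLEX_TMP")"
rm -rf "$PLEX_TMP"
ln -s "$(pwd)/$SSD_TMP" "$PLEX_TMP"  # writes to PLEX_TMP now land on the SSD
ls -ld "$PLEX_TMP"
```

Plex also has a transcoder temp directory setting in its server preferences, which is the cleaner way if your version exposes it.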

The general rule is that for any .XXXX version update you can't use the built-in web GUI in DSM, because we don't have real Synology onboard flash RAM to update. The update in the web GUI basically updates the onboard flash with the DSM .pat file, reboots, then updates the DSM partition on the hard drives. Since we obviously don't have the onboard flash RAM, it will fail.
We emulate this process, however, by replacing our boot device (IMG, ISO, USB) with the newer Nanoboot/Gnoboot version, essentially faking the flash RAM update. Then we use the upgrade/downgrade function in the boot menu to emulate the DSM partition update on the hard drives.
There are some important differences between the two, though. The process we go through is really a "migration" from one DSM to another (like removing the hard drives and putting them in another DiskStation). This is important to understand, because when you migrate from one DSM to another you may lose certain settings that are tied to a DiskStation (domain permissions, etc.). So before you migrate you should ALWAYS do a Control Panel -> Update & Restore -> Configuration Backup; this creates a .dss file that you can use to restore the settings after the migration.
Incremental updates, e.g. .4493 Update 1, you can do through the web GUI, as I believe those updates don't write to the flash RAM and just update the DSM partitions.

Ahh, another one bites the dust. I can't help you, sorry; maybe someone else can.
Warning for the future or for other users: RDM does not pass SMART data along, so XPenology can never know when a disk is failing.
Set it up in ESXi or use passthrough.
I have taken the disks and connected them directly to another PC. There's nothing wrong with them and the data is still there. I think this is a logical problem: somehow DSM has corrupted the partition table or something similar.
We can argue all day about the use of RDM, but Nindustries provides no real alternative: using an ESXi virtual disk doesn't pass SMART data to XPenology, and doing passthrough requires at least two controllers, one for the Datastore of your ESXi host and one for passthrough to the VM. The majority of people here won't have a dedicated controller, nor would Nanoboot/Gnoboot have the driver to support it if it were passed through. Remember, Nanoboot doesn't have the vast driver support of FreeNAS or VMware; that's one of the reasons we virtualize. If Nanoboot has drivers for your machine, of course you're better off running it bare metal.
Anyway, back to amuletxheart's problem. First off, I don't think you should ever expand a volume by more than one drive at a time if you only have single-drive redundancy (RAID 5/SHR), as the drive you are adding is usually untested and multiple drive failures can cause unrecoverable errors. I know it's tempting to save a rebuild, but think of it like this: if you expand from 4 to 5 drives and one drive fails, you should still be able to recover the data, because during the expansion data is written to 5 drives but only 4 are needed for recovery. If you expand by 2 drives, going from 4 to 6, data is written to all 6 drives and 5 are needed for recovery; if the 2 new drives fail (because they are untested), you will not be able to recover the data, because you only have 4 good drives.
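The arithmetic above boils down to one rule, which you can sanity-check yourself: with single parity (RAID 5/SHR), an N-drive array needs N-1 healthy drives to recover, so every drive you add raises the number that must survive.

```shell
# Toy check of the single-parity arithmetic: N drives -> N-1 needed for recovery.
needed() { echo $(( $1 - 1 )); }
echo "5-drive array needs $(needed 5) healthy drives to recover"
echo "6-drive array needs $(needed 6) healthy drives to recover"
```

That's why going 4 -> 5 still leaves your original 4 drives sufficient mid-expansion, while going 4 -> 6 does not.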
I'm trying to figure out your problem... and I'm thinking that maybe you mapped the two RDMs to the same physical drive. Is that a possibility? This won't solve your problem, but if you change your VM's SCSI controller to the LSI Logic SAS controller, DSM will show your "real" drive model and serial number instead of VMware's virtual drive; it's an easy way to check, because if two drives show the same serial number you know you have two RDMs mapped to one physical drive. Lastly, if you go back to your original 4-drive configuration from before the second expansion, does DSM see the volume/repair it?

In Hyper-V I suggest you use the ISO version of Nanoboot instead of the IMG/VHD as the boot device. Just replace your IDE(0:0) boot hard drive with an IDE(0:0) CD/DVD; you won't have to reinstall anything. DSM always seems to pick up the spare space in the boot drive, which is why you are seeing the 2MB drive. With a CD/DVD it won't show up. If you need to modify the ISO image file, you can use WinISO 5.3 (it's free).

In step 2 of my guide, where you modify the IMG file so the boot drive doesn't show up, you can also change the serial number. There's a line that has sn=; that's the serial number. Change it to one you like, but please back up your settings in DSM first, as I believe you may lose some share permissions when you change the SN because DSM then thinks it's a different unit.
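Same trick as with the VID/PID: only the sn= token on the kernel line changes (untested sketch; the file name and the new serial below are just placeholders):

```shell
# Sketch of the serial-number edit; "syslinux.cfg" and B3J4N09999 are
# illustrative placeholders, not real values.
CFG="syslinux.cfg"
echo 'kernel zImage syno_hw_version=DS3612xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168' > "$CFG"
sed -i 's/sn=[A-Za-z0-9]*/sn=B3J4N09999/' "$CFG"
grep -o 'sn=[A-Za-z0-9]*' "$CFG"
```

Do the Configuration Backup before rebooting with the new SN, for the reason above.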

I have a similar setup to yours and noticed the same slower transfer speed in ESXi. I mentioned it in this thread http://xpenology.com/forum/viewtopic.php?f=2&t=3191. When I have time this weekend I will do some tests under ESXi and see if the speed can be improved. I think it's a driver issue (or setting) in ESXi, and I hope to figure it out, as an ESXi host is way easier to manage than Hyper-V Server.

I'll try to answer your questions the best I can, since I am not an expert. Your first question was about virtual RDM and why I suggest it instead of a virtual disk (vmdk). Besides getting better speed, you get portability of your data. This means you can take your hard drives and put them in a real Synology DiskStation and your data will still work, or you can put the drives in a Hyper-V environment and your data will still be there. I have personally tested Hyper-V, so I know it works. It works because with RDM you are letting the DiskStation software have direct access to the disk drive to create the DSM partitions. If we compare what is physically stored on the disk drive when you take it out and put it in a non-ESXi environment, you get this (using a 1TB drive as an example):
virtual RDM drive: 2GB Partition (DSM software version), 2.5GB Partition (Volume information), 965 GB Partition (Data)
virtual (vmdk) drive: 1TB Partition with disk.vmdk file
As you can see, if you put a virtual drive instead of an RDM drive into a real Synology DiskStation, it won't see the data or DSM partition and will tell you to format the drive (you lose all your data). With the RDM drive it will see the DSM partition, know it came from a DiskStation, and mount the volume with the data intact.
Regarding your question about virtual RDM not being a good idea, I don't think that's correct. We have to use the virtual RDM "trick" because VMware by default doesn't let you use a local drive as an RDM... it only allows a SAN or other storage. I don't believe there's any risk in using virtual RDM, because the Synology software is actually handling the RAID; if there were an error, DSM would alert you.
Now for your second question, about installing Nanoboot directly on your system. Yes, you can do that. You will need a USB stick or CD to boot off; you can't boot off an SSD, because DSM will detect the SSD and format it during install. Remember, a real DiskStation has a flash drive inside that it boots from; we "fake" this using Nanoboot on a USB stick or CD. I will not write a short guide on doing a direct install because similar guides already exist (look at the install guides for the HP N40L/N54L). The reason is that every system has different hardware, unlike ESXi, where it doesn't matter what hardware you have because the virtual machine is the same. Also keep in mind that Nanoboot has to have drivers for your hardware for a direct install, unless you know how to add your own drivers and compile Nanoboot yourself. You can always unplug your drives, make a USB stick with Nanoboot on it, and see if your hardware is supported before deciding on a direct install.
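Making that test stick is basically a raw image write plus a verify. A hedged sketch (the image name is a placeholder, and the target here is a plain file standing in for the stick; on real hardware the target would be the stick's device node, so double-check it with lsblk before writing!):

```shell
# Sketch: write the Nanoboot image to a stick and verify the write.
IMG="nanoboot.img"
TARGET="stick.img"                                        # stand-in for /dev/sdX
dd if=/dev/urandom of="$IMG" bs=1M count=4 2>/dev/null    # dummy image, for illustration only
dd if="$IMG" of="$TARGET" bs=1M conv=fsync 2>/dev/null    # raw copy, flushed to the device
cmp -s "$IMG" "$TARGET" && echo "write verified"
```

Then boot the machine from the stick with your drives unplugged and see if Nanoboot finds your NIC and controllers.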

Is your VM set up to use the "Legacy Network Adapter"? If so, that would explain your 10MB/s speed and not seeing the VMQ setting. If you use the regular "Network Adapter", you should see an extra "Hardware Acceleration" option like this:
From my searching around, VMQ can be beneficial in some configurations and usage scenarios, but on my particular system running Nanoboot it was causing problems.

Poechi's link pretty much covers how to install Nanoboot on Hyper-V, whether you are using Windows 8 w/Hyper-V, Windows Server w/Hyper-V, or pure Hyper-V Server. The only difference is that with Hyper-V Server there is no GUI; you have to set it up via the command line. Once it's set up, though, you can remotely manage the server using RSAT on a Windows 8 machine. This guide here pretty much tells you how to do all the command-line stuff so you can manage it with a GUI on another machine. Just remember: Hyper-V Server 2012 requires Windows 8 or Windows 8.1 to manage, and Hyper-V Server 2012 R2 requires Windows 8.1.

I wasn't getting great speeds with ESXi 5.5; it was erratic, usually fluctuating between 20-80 MB/s using virtual RDM and the E1000 network adapter. I have a feeling it's the network driver or some setting in it that wasn't correct. Anyway, to test my theory I decided to duplicate the setup on Hyper-V Server 2012 R2... this was easy to do, as I had ESXi booted off a USB stick (with the Datastore on an SSD) and Hyper-V booting off an SSD. Since I didn't use virtual drives for my data, the only drive I had to convert to VHD format was my 32GB virtual drive for applications. Once I created the Hyper-V VM and mounted the drives, DSM started up fine, with no reinstall of anything. I did some copying over the network to see if the speed was better, and sure enough it is... these are the results I'm getting in Hyper-V while copying approximately 100GB of mostly 50MB files:
As you can see, the speed is way better than ESXi. This is a 4-drive (5400 rpm) SHR configuration over one network port. I know DSM usually shows higher rates because there's one drive of parity (so approximately 33% higher in a 4-drive configuration); Windows shows a transfer rate between 90-110 MB/s, very consistent. I had to disable the VMQ (Virtual Machine Queue) setting on the network controller to get these results. With VMQ on, the speed was erratic like in ESXi. I will move back to ESXi when I have time and try to figure out if there's an equivalent setting that needs to be disabled to get the same performance as Hyper-V. If there is, then it's going to be a tough choice... ESXi is more widely supported and very simple to manage, while Hyper-V Server is ridiculously hard to manage (no GUI, can only be remotely managed from a Win 8.1 machine, usually requires a domain) but has good performance and lower power consumption (approx. 43W vs 50W).

I tried it and trashed it after it didn't finish indexing in 48 hours. And my collection was rather small at that time.
Nevertheless, let's stop whining and check what this man has to say about our proposals
If you have "Generate media index files during scans" enabled, it will take a long time (probably more than 48 hours, depending on the type of machine you have). This is different from a regular library scan, which downloads the movie info and cover art. The media index actually goes through each video and generates thumbnails at various times, so on players that support it you will see the "pop-up" thumbnail when you are fast-forwarding. Depending on your system, this can take 30 minutes or more per movie.
One of the awesome features of Plex is its ability to transcode on the fly (if you have the hardware power to do it); this is what separates it from DLNA (and XBMC), along with the simplicity of sharing your server (just make a free Plex account). For example, besides being able to watch my media on my TV when I'm at home (like you can with DLNA/XBMC), if I'm away from home but have an internet connection, I can continue to watch my media on my tablet, phone, or web browser, as the Plex server will transcode whatever I'm watching on the fly so it streams smoothly over the internet.
Server sharing takes this even further. I have friends and relatives who created free Plex accounts; I just add their names to my friends list and now they can also stream media from my server. You have a Samsung Smart TV... my brother, who lives in another city, has a Samsung Smart TV with the Plex app; he just set it up with his free Plex account and now he can stream movies over the internet from my server as if they were local. You can't do that with DLNA. Plex is quite powerful for sharing media.

Are you saying you're only getting 10 Mb/s = ~1 MB/s transfer speed? That doesn't sound right. If you mean you are getting 10 MB/s, then check your network adapter setting; it's probably set to 100 Mb/s instead of gigabit speed. Also, in the virtual network adapter settings, disable VMQ (Virtual Machine Queue); it seems to slow down the network transfer or make it erratic. I was getting around 25 MB/s with VMQ enabled; now I'm getting this:
Not bad for 5400 rpm drives in SHR. I was never able to achieve that kind of speed when running ESXi (erratic 20-80 MB/s), which made me try out Hyper-V Server 2012 R2. So far I'm pretty impressed with Hyper-V's performance and lower power usage (I measured around 7-8W lower), but I'm finding it very difficult to manage (no Device Manager, are you kidding me!).