Topic: Design Question - New system install

Long time, no input from me. Been working on moving and whatnot and had to abandon my efforts on my LMCE system. I'm back and looking to get into the nitty gritty.

I have a concept I'm going to try out, and I was wondering if any of you had input on the matter.

I just recommissioned my beefy PC as a new Ubuntu 12.04 LTS server and set it up as a Samba server, so it will be network storage for both Windows and Linux PCs. It's a Q6600 with 8GB RAM and a 512MB video card (dual gigabit NICs; I'll be adding a third).
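For reference, a minimal Samba share for that kind of mixed Windows/Linux media storage might look something like this in smb.conf (the path, share name, and group are illustrative assumptions, not the poster's actual config):

```ini
[media]
   comment = Shared media for Windows and Linux clients
   path = /srv/media           ; assumed location of the media store
   browseable = yes
   read only = no
   guest ok = no
   valid users = @mediausers   ; assumed group of allowed users
```

After editing, `testparm` checks the syntax and `service smbd restart` (on 12.04) picks up the change.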

I also have VirtualBox installed on it, and what I'm thinking of doing is running LMCE inside a VM, dedicating the necessary resources for LMCE to be my core. (I'd also bridge two of the gigabit NICs for LMCE, leave the third as a NIC dedicated to the host server, and mount whatever USB devices my LMCE image needs.)
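A sketch of that setup from the VirtualBox command line might look like the following. Everything here is an assumption for illustration: the VM name, memory/CPU sizing, the host interface names, and the USB vendor/product IDs would all need to match the actual hardware.

```shell
# Create and size the VM (name and resources are assumptions)
VBoxManage createvm --name lmce-core --ostype Ubuntu_64 --register
VBoxManage modifyvm lmce-core --memory 4096 --cpus 2

# Bridge two guest NICs to two physical gigabit NICs on the host
VBoxManage modifyvm lmce-core --nic1 bridged --bridgeadapter1 eth0
VBoxManage modifyvm lmce-core --nic2 bridged --bridgeadapter2 eth1

# Pass a USB device (e.g. a home-automation dongle) through to the guest,
# matched by vendor/product ID (IDs here are placeholders)
VBoxManage modifyvm lmce-core --usb on
VBoxManage usbfilter add 0 --target lmce-core --name dongle \
    --vendorid 0x0658 --productid 0x0200
```

The host keeps its third NIC for itself; the guest sees the two bridged adapters as its own eth0/eth1.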

That way I can use LMCE as a standard core and also have the benefit of having all of my media stored on a network share for anyone to access.

Later, I will add separate nettop boxes to be my MDs throughout the house.

Does anyone have any reservations about this? I figured this would be a nice approach, since I have plenty of processing power on this PC and want to make the most of it.

Poside's right. LinuxMCE can handle being in charge of the network-attached storage just fine, without another server running. The main reason to run LinuxMCE inside a VM would be to run some other OS side by side, for research or craps-n-giggles.



He might have other workloads he wants to run on the same server. There's nothing wrong with virtualizing the core (or anything else, for that matter) if you have a reason and know what you're doing. I'm running five other VMs in addition to a virtual LMCE core, which lets me experiment with LMCE without disrupting the "production" environment. I'll probably stand up another virtual core as a dev environment to work on some stuff, and connect it to the virtual MD I sometimes play with.

On a six-core 2.8GHz AMD system with 8GB RAM, I'm running the core, a Windows Home Server 2011 instance (iTunes server for all the iDevices), an Astaro VPN endpoint, my mail/calendar server (Zarafa), MisterHouse home automation, and an Ubuntu desktop environment as a virtual hosted desktop. During peaks, my load is about half of what the system is capable of; I'm I/O-bound more than CPU-bound due to the SATA disks. When SSDs drop in price, I'm going to pick up a bigger one for VM disk storage (I have the various databases on LVM volumes from a 40GB SSD to handle the IOPS requirements and to lessen the load/latency on the disks).

As far as networking goes, I have two NICs in the system. eth0 is attached to br_ext, a bridged network device connected to my home router, which gives outside access. eth1 is attached to br_int, another bridge, but for the internal LMCE-managed network. The core owns that network, so any physical device connected to the switch on that NIC is seen by the core, as is any virtual server connected to br_int. The virtual core's eth0 is connected to br_ext for Internet access and for access from the existing production environment. Its eth1 is connected to br_int, where it provides DHCP and all related services to the "internal" network, as per LMCE's architecture. So I can run both environments in parallel, without affecting the wife and kids.
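On an Ubuntu host of that era, a two-bridge layout along those lines could be declared roughly like this in /etc/network/interfaces (requires the bridge-utils package; interface and bridge names here just mirror the ones described above and are otherwise assumptions):

```ini
# External bridge: faces the home router, gets an address via DHCP
auto br_ext
iface br_ext inet dhcp
    bridge_ports eth0

# Internal bridge: no host address needed; the virtual core
# provides DHCP and related services on this segment
auto br_int
iface br_int inet manual
    bridge_ports eth1
    bridge_stp off
```

Guests then attach their virtual NICs to br_ext or br_int as appropriate, and anything plugged into the physical switch on eth1 lands on the core's internal network.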

So, the takeaway is: if you have a reason to virtualize it, go for it. If you don't have a use case figured out, then you might not want to, because it can be more work. With great flexibility comes a lot of (your) overhead.

I personally can't get access to my PCI/PCIe cards, only because my motherboard doesn't support AMD's IOMMU (the technology that allows you to map physical devices into virtual machines); VT-d is the Intel version. Most server-class motherboards support it, which is what allows Xen's/KVM's PCI passthrough to work in datacentre environments. Ditto for VMDirectPath, VMware's version of the same concept. So, if I wanted to fork out the money for an 890FX or higher chipset motherboard, I could get it to work. I'm not rolling in dough, so I'll be improvising by running a MythTV backend on the host OS to manage the cards, while the master backend resides in a VM. Next time I life-cycle hardware, I'll get a mobo with IOMMU support.
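If you're unsure whether your own box can do passthrough, a few quick checks on the host give a good hint (output is obviously hardware-dependent; these are diagnostic commands, not a recipe):

```shell
# Does the kernel see an IOMMU? AMD logs "AMD-Vi", Intel logs "DMAR"
dmesg | grep -i -e iommu -e 'AMD-Vi' -e 'DMAR'

# If the IOMMU is active, devices show up grouped here for passthrough
ls /sys/kernel/iommu_groups/

# CPU virtualization extensions: svm = AMD-V, vmx = Intel VT-x
grep -E -o 'svm|vmx' /proc/cpuinfo | sort -u
```

Even with a capable CPU, the IOMMU usually has to be enabled in the BIOS and, on many setups, via a kernel boot parameter (`amd_iommu=on` or `intel_iommu=on` in the GRUB command line).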

So yes, it does work; it's just dependent on having the right hardware support. The processor, chipset, and BIOS all have to support it, which can be challenging with consumer hardware. It can be even trickier with Intel, because some processors support VT-d and some don't; buyer beware. The only downside to passing through cards is that it negates the ability to vMotion a guest from one host to another, because it introduces a physical dependency.