Guided by the wisdom of AskMeFi, I recently purchased a tricked-out desktop computer. It's a whitebox build, and rather than have the builders install the operating system(s) for me, I asked that they just format the drive. I want the learning experience of installing my own. I know how to find tutorials-- what I'd like are your tips and tricks, both about the OSs and the software running on them, that I might not find in a tutorial.

I have more CS knowledge than the average joe, but less than a CS sophomore. I'll be leaning heavily on those tutorials.

I'll be using the machine for gaming and for my software dev/sysadmin curriculum. I want to make optimal use of my hardware, without overclocking. (Is that a contradiction in terms?)

I'm thinking dual-boot, Windows 7 and a yet-to-be-determined flavor of Linux. I have experience with Debian, Fedora, and Ubuntu, but none of them jump out at me as being better than the others for my purposes. I want a distro that won't do too much hand-holding, but is usable and compatible enough that I will spend most of my (non-gaming) time booted into the Linux side.

There are coolers and thermal compound and the chassis is a Corsair 250D, about a foot cubed, though I doubt any of that is relevant. I'm aware of the issue with the RAM and standard BIOS settings, and I intend to tweak them accordingly.

I'm mainly concerned with maintenance things, like TRIM, that might otherwise be set up for me. Again, I'm still fairly ignorant. I would also love suggestions of killer background programs a la f.lux.

I still have the boot drive (a 64GB Crucial SSD) from my old laptop, and I'm going to convert it into an external. I was running Windows 7/Debian Squeeze on it. Are either of those OSes connected to the boot drive, still, or is that not how it works? Forgive my ignorance.

This might be too basic a note, but you will want to be sure to install Windows first when setting up a dual-boot system. Linux is happy to share, but Windows wants the entire HDD to itself.

I can't recommend any version of Linux over another from a dev/sysadmin perspective, but I recommend that you consider Linux Mint Cinnamon as well. It is based on Ubuntu (and uses the same repositories) but is more flexible, imo. I use it as my primary OS.
posted by 1367 at 10:11 AM on December 25, 2014

To set up dual-boot boxes I generally partition the disks using a Linux live CD and format one of them NTFS, then install Windows to that, then install Linux. This way you end up with a Windows installation that only occupies a single partition, and you don't have to fight with Windows over the bootloader.

UEFI is a complete pain in the arse, and I avoid it where possible. On the other hand, GPT is a much nicer partitioning system than MBR. GRUB2 will happily chainload to the Windows bootloader regardless of whether you use MBR or GPT partitioning, so once you've built a dual-boot system it will work. However, Windows' own bootloader doesn't support booting off a GPT-partitioned drive using non-UEFI firmware, so trying to install Windows into that setup causes grief.
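If GRUB's os-prober doesn't pick Windows up automatically, a chainload entry can be added by hand. A minimal sketch (the partition numbers are illustrative, assuming an MBR disk with Windows on the first partition):

```
# /etc/grub.d/40_custom -- run update-grub after editing
menuentry "Windows 7" {
    insmod part_msdos
    insmod ntfs
    set root=(hd0,msdos1)
    chainloader +1
}
```

The `chainloader +1` line just hands control to the boot sector of that partition, which is why this works the same whether Windows sits on MBR or GPT.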

My installation path of least resistance for disks I want to partition with GPT is to create an initial MBR partition table defining only the Windows NTFS partition, format it with ntfs-3g from the same Linux live environment I used to partition it, then boot the Windows setup disc and install Windows (doing it this way stops Windows making its idiot 100MB separate boot partition). Once the Windows installation is done, I wipe the partition table and re-do it as GPT, making sure that the new table has an NTFS partition in exactly the same place as the original MBR table did. This is easy if you start the original MBR partition at sector 2048 (the first non-zero 1MiB boundary).
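A sketch of that first step, assuming parted and the mkfs.ntfs that ships with ntfs-3g; /dev/sdX is a placeholder for the target disk and 100GiB is an arbitrary Windows partition size. This is destructive, obviously.

```
# WARNING: wipes the partition table on /dev/sdX (placeholder name).
parted -s /dev/sdX mklabel msdos
# Start at 1MiB (sector 2048) so the same start sector can be reused
# when the table is later re-done as GPT.
parted -s /dev/sdX mkpart primary ntfs 1MiB 100GiB
mkfs.ntfs -Q -L Windows /dev/sdX1   # -Q = quick format
```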

I never make more than one partition per drive for Linux, and it's always an LVM partition. On rotating disks the first LVM volume I allocate will always be swap, to put it on the fastest part of the drive - doesn't matter for your SSDs. GRUB2 has been able to boot directly from LVM for a long time now, so there's really no need for a separate non-LVM boot partition.
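A minimal sketch of that layout, assuming the Linux partition is /dev/sdX2 (placeholder) and an arbitrary 8G of swap:

```
pvcreate /dev/sdX2                 # one physical volume per drive
vgcreate vg0 /dev/sdX2
lvcreate -L 8G -n swap vg0         # swap allocated first
mkswap /dev/vg0/swap
lvcreate -l 100%FREE -n root vg0   # everything else in one root LV
mkfs.ext4 /dev/vg0/root
```

With GRUB2 booting straight from LVM, the root logical volume can hold /boot too.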

Are either of those OSes connected to the boot drive, still, or is that not how it works?

Not really sure what you're asking here. If you've got GRUB2 installed on that 64GB SSD, you should find that Debian will still boot from it even after you move it out to a USB enclosure; Windows will bluescreen. If you don't want to boot from it, but just use it for data storage, then you should find that Windows will happily work with its existing NTFS partition (though you might have to fool around with permissions and ownership to get rid of Access Denied problems) and any Linux will give you full access to all partitions.

I like XFCE much better than GNOME or Unity. I use it on Debian Testing, which is a rolling release that gets the occasional Big Freeze before turning into the next Debian Stable.

If you're going to use Debian Testing, install build-essential and set up your sources.list to point to the testing repo for binaries and the unstable repo for sources, like this:
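Something like this, where the mirror URL and component list are illustrative:

```
# /etc/apt/sources.list
deb     http://deb.debian.org/debian testing  main contrib non-free
deb-src http://deb.debian.org/debian unstable main contrib non-free
```

With that in place, `apt-get build-dep somepackage` followed by `apt-get -b source somepackage` pulls the unstable source and builds it locally.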

That way you can easily reach up into Unstable to build the occasional bleeding-edge package from source when you need to, and when that version does eventually move to Testing, it will automatically be included amongst your regular updates.
posted by flabdablet at 10:35 AM on December 25, 2014

What are your goals? Using Linux without actually doing anything concrete usually tends to end up not teaching very much.

You mention a dev/sysadmin program, so let me turn your question around a bit. Twiddling bits like setting up TRIM or doing "background"-y type things is good to know, but tends toward being the kind of skill that's looked up and forgotten, over and over. As someone who does these kinds of things for a living, I've long since abandoned getting a single system set up "perfectly" and carrying it around (for example, tweaking configuration files or kernel module setups, and so on). The reason is that my work now involves building and tearing down entire clusters - dozens! hundreds! - of machines in a day. This involves knowing lots about systems at a very low level, but only in the service of configuring as little as possible. I start with a "base" setup that only just barely boots, and I layer atop it exactly what I need and no more.

So that's the industry use case for what I'm about to suggest: install whatever version of Linux you find sufficiently shiny - Arch is decent. Mint is good. Ubuntu is fine. It doesn't really matter. Because the real "work" you want to do should be virtualized. If you really want to get dirty with the nuts and bolts of Linux at a low level, build Linux From Scratch or Yocto and run that in a VM. Install Salt and use it to tweak configurations in Debian or CentOS VMs, instead of doing it by hand. Build your own CI "pipeline", and spin up tiny virtual machines to host whatever hobby services you want to play with.

There's a long and storied history of Linux users tweaking their configs just so. That was the hobby. I get it! But, to be frank? Most default installations of Linux these days are already well optimized and don't leave much room for improvement.
posted by TheNewWazoo at 10:40 AM on December 25, 2014 [1 favorite]

I have VirtualBox installed on my Windows 7 desktop, and it's great to be able to quickly create a VM to test something.
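For what it's worth, VMs can also be created without the GUI; a sketch using VBoxManage, where the VM name, sizes, and ISO filename are all illustrative:

```
VBoxManage createvm --name testbox --ostype Debian_64 --register
VBoxManage modifyvm testbox --memory 2048 --cpus 2
VBoxManage createhd --filename testbox.vdi --size 20480   # 20 GB disk
VBoxManage storagectl testbox --name SATA --add sata
VBoxManage storageattach testbox --storagectl SATA --port 0 \
    --device 0 --type hdd --medium testbox.vdi
VBoxManage storageattach testbox --storagectl SATA --port 1 \
    --device 0 --type dvddrive --medium debian-netinst.iso
VBoxManage startvm testbox
```

Handy once you're scripting the creation and teardown of test machines rather than clicking through the wizard each time.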

Also, I agree with TheNewWazoo with regard to solving a concrete problem. Pick something interesting you'd like to do, and then figure out how to do it. Merely working your way through Linux tutorials is fine, but it gets old pretty quick.

Finally, you didn't ask about this, but you might consider hacking a Raspberry Pi device. They're cheap, they run Linux, and there are all sorts of cool projects you can do with them.
posted by alex1965 at 11:47 AM on December 25, 2014 [1 favorite]

I would set it up as dual-boot with separate OS partitions for Linux and Windows. That gives you the option of running them either as VMs or booting into one or the other to run it on the 'bare metal'. If you only set up one partition and install one OS on it, you are stuck with that OS as the 'bare metal' / dom0 operating system and must always run that one as the master with the other in a VM. I do not like that configuration, personally (because I think Linux makes a good day-to-day OS, but you might want to run Windows on the metal for gaming).

Anyway... the process I go through when I set up machines is something like this:

Partition the boot drive. (I am assuming here you are not going for a RAIDed boot-drive configuration. That introduces some very non-trivial additional complexity, and would tip the balance into not dual-booting and just doing one bare-metal OS with VMs.) You can use whatever drive partitioning program you want, I normally use a Linux bootable CD of some sort and use fdisk.

Install Windows. You point it to one of the partitions and it installs itself. It will also overwrite the MBR on the drive, which is obnoxious, but that's why we do it first.

Install all the Windows drivers for your hardware. For a whitebox build, this may involve a lot of screwing around and obtaining drivers for the mobo chipset, sound card, WiFi, etc. Win7 is not quite as dependent on vendor drivers as previous versions of Windows; you may only need to install it for particular special features of various bits of hardware (e.g. you might need it for special overspeed features of the WiFi chipset but not for normal basic operation; you might need it for 5.1 audio but not regular stereo, whatever). Basically, get Windows working more or less the way you want it to work.

Install Linux onto the other partition. Most Linux distributions make a dual-boot setup pretty easy. The Linux installer will rewrite the boot-drive's MBR with something that invokes GRUB or another open-source bootloader, letting you select the operating system on boot.

Finish installing and configuring Linux. You may want to mess with the bootloader configuration to make it always show the OS selection menu, or make it default to a particular choice each time (maybe the last choice, maybe not). I also always turn off the stupid 'quiet mode' options so that I can see the boot process in case something is wonky.
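On Debian-family distros those tweaks live in /etc/default/grub; a sketch (variable values are just examples, and you need to run update-grub afterwards):

```
# /etc/default/grub
GRUB_DEFAULT=saved              # boot the previously chosen entry
GRUB_SAVEDEFAULT=true
GRUB_TIMEOUT=5                  # always show the menu for 5 seconds
GRUB_CMDLINE_LINUX_DEFAULT=""   # drop "quiet" to see boot messages
```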

Boot into whatever your day-to-day OS is going to be, and install your virtualization solution (I use VMware Workstation). Set up a virtual machine, using the other operating system's partition as the VM's attached storage. Boot it up and test.
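As an aside, if you use VirtualBox instead, a raw-disk image pointing at the other OS's partition can be created like this (the device and partition number are illustrative, and the command needs read/write access to the raw device):

```
# Wrap partition 2 of /dev/sda (placeholder) in a VMDK descriptor
# that a VirtualBox VM can attach as its disk.
VBoxManage internalcommands createrawvmdk \
    -filename win7-raw.vmdk -rawdisk /dev/sda -partitions 2
```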

If everything works right, you should now have a system where you can boot into one OS and run the other inside a VM container, or you can boot directly into it for maximum performance. The only caveat is that depending on what version of Windows you use, sometimes Windows gets cranky about hardware changes (thinks that going between bare-metal and the VM container is a hardware change). I've never cared enough to figure out why it does this, it might be that you need a 'professional' or 'enterprise' version of Windows, or one with a VLK. (Supposedly you can call up MSFT if it keeps complaining and you have the proper license and they can make it shut up, but like I said, I've never bothered.)

This works pretty well as long as you have an uncomplicated hardware configuration, e.g. one big SATA drive, no RAID, no SSD hybrids or anything.
posted by Kadin2048 at 12:46 PM on December 25, 2014

N.B. you can also install Windows onto the drive (taking the entire drive as a partition), then use a partitioning program to shrink the Windows partition to create room for a second one, then install Linux on the second partition. I don't like to do this because I don't trust partition-shrinking in general, but sometimes it's useful if you don't want to completely wipe out a factory Windows install (e.g. on a laptop you just bought which has a ton of drivers installed and doesn't have install media). I think this method was required for WinXP and previous so there are a lot of tutorials around for it. With Win7, you can do a "custom" install and target a particular partition, not the whole drive.
posted by Kadin2048 at 12:50 PM on December 25, 2014

Virtual machines are great; you can find an image for virtually any single task you would care to define. For a general-purpose install with lots of apps included I suggest Ninite - for a one-off it's not much, but if you have multiple VMs it will let you breathe easier knowing where everything is.
posted by ptm at 6:25 PM on December 25, 2014

There are a bunch of little things that I find need doing on new installs. Maybe they're more automagically taken care of when installing a full bells-and-whistles desktop, but I find them missing or not quite how I like them when installing a basic system and building up. (This is Linux side)

I migrated a Debian sid system to new hardware a couple of weeks ago, and following a Debian UEFI Howto managed to get it UEFI booting. You'll need an Ubuntu 14 CD/USB for the bootstrapping; Ubuntu is UEFI ready (and you need to have booted via UEFI to do the conversion from non-UEFI to UEFI). `gdisk` can convert from MBR to GPT, and you'll need space for a couple of extra partitions (one tiny one for BIOS usage, one a bit larger for the actual EFI partition; mine are 1M and 60M just because that was easy).
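A sketch of that conversion using sgdisk, the scriptable tool from the gdisk suite. The device name and final partition number are placeholders, the sizes match the comment above, and this rewrites the partition table, so have backups first.

```
sgdisk -g /dev/sdX                     # rewrite the MBR table as GPT
sgdisk -n 0:0:+1M  -t 0:ef02 /dev/sdX  # tiny BIOS-boot partition
sgdisk -n 0:0:+60M -t 0:ef00 /dev/sdX  # EFI system partition
mkfs.vfat /dev/sdX3                    # the EFI partition must be FAT
```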

So... if you want to go UEFI you might want to go Ubuntu if you have no preference.

Your old Linux probably uses UUIDs in GRUB/fstab. If you manage to get to the right bootloader instance from the BIOS boot menu, things will probably work regardless of drive location.
posted by zengargoyle at 2:42 AM on December 26, 2014

Update: So, I'm checking through StackExchange how much graphics performance I'd lose trying this, but right now, I'm actually looking at running a bare-metal hypervisor with Windows 7, Archlinux, Debian, etc. as virtual machines, rather than dual-booting or stacking OSs.

I'm also happy to note that my CS 'curriculum' includes lots of concrete goals and projects, and that I do indeed have a Raspberry Pi to play with.
posted by dee lee at 9:11 AM on December 26, 2014

If you are into "software dev/sysadmin curriculum" as you put it, running Linux in a virtual machine is more than sufficient. Don't complicate your life with bare metal hypervisors, simply not worth it.

You have an i7, which gives you 8 hardware threads (4 cores with Hyper-Threading). You have 16 GB RAM. That's plenty for virtual machines.

You can have the best of all possible worlds. You don't have to mess with Linux graphics drivers and what nots. Have Windows as your base. Game when you want to.

You can learn all your system admin curriculum and software dev stuff from Linux VMs. Message me if you have any questions regarding this setup, I am both a sufficient Windows power user and use plenty of Linux VMs all the time, I should be able to help you out.

(The above will not work if you intend to game on Linux .. please tell me that's not your goal)
posted by harisund at 8:31 PM on December 26, 2014 [1 favorite]
