I discovered another flaw with the Microserver. The rear USB ports are too recessed to accept some USB devices. I have a Wi-Fi dongle and a USB-to-serial converter that won't insert far enough into the socket to be detected. They work fine in the front USB sockets.

I've just had a problem with the suggested mod. With the new SY1225SL12HPVC fan, the system switched on normally, the RPM was correctly reported, and everything seemed OK. But after five minutes the Microserver died. HP support thinks that, with no LED lighting at all, there must be a problem with the PSU... What went wrong?

I've got a HP N40L Microserver at the moment. The stock unit is quite noisy, most of the noise coming from the PSU.

The Scythe Slipstream fan mod mentioned in the SPCR article is easy to do and works well. There's a torx key included inside the server's case door that you can use for removing the fan.

It's worth bearing in mind that not all PicoPSU models will fit in the HP N40L Microserver. The PicoPSU 160-XT power supply, which has four capacitors, is too tall and won't slide underneath the hard drive bay.

I'm going to use a Samsung 830 128GB SSD for the operating system. An Icy Dock EZConvert MB882SP-1S-2B 2.5" to 3.5" SATA SSD/hard drive converter lets you put an SSD in one of the main drive bays.

I was originally building this server with stock parts for someone else. It was going to sit in a garage as their home server. It was to have the 250GB Seagate Barracuda 7200.12 drive for the Windows Home Server 2011 operating system and then a pair of 3TB Seagate Barracuda 7200.12 drives in RAID 1 for data storage. It was all working fine and seemed very stable, with no problems using the 3TB hard drives either. They show up in the BIOS, and you initialise them with GPT partition tables through Windows Disk Management to use their full capacity.

Then the mains power lead was accidentally unplugged whilst the server was running, as though there had been a power cut...

You wouldn't think an unscheduled shutdown would be a big deal, but when the server was rebooted it had lost the RAID array in the BIOS, all the data drive shares had vanished in Windows Home Server 2011, and the two 3TB drives which had used GPT partitions also showed as no longer initialised. The main 250GB operating system drive with its MBR partition was fine and still working normally.

In a way it was good that it happened whilst I was still setting it up. If all the data had been lost from both of the RAID drives when it was in use, that would have been much worse. It would have been connected to a UPS for normal use, reducing the risk from a power cut. Even so, the potential for losing everything like that with the 3TB drives was too much of a risk. I ended up using a Synology DS213+ NAS box for the garage install instead, leaving me with this HP Microserver. The plan now is to get some 2TB drives for it (using MBR partitions) with the BIOS in AHCI mode, not mirror the drives, and instead use a scheduled robocopy to back one drive up onto the other.

Quote:

Then the mains power lead was accidentally unplugged whilst the server was running, as though there had been a power cut...

You wouldn't think an unscheduled shutdown would be a big deal, but when the server was rebooted it had lost the RAID array in the BIOS, all the data drive shares had vanished in Windows Home Server 2011, and the two 3TB drives which had used GPT partitions also showed as no longer initialised. The main 250GB operating system drive with its MBR partition was fine and still working normally.

And that, ladies and gentlemen, is what you get for using Windows and proprietary software RAID.

After that happened the first time I also tried the two 3TB drives using the built-in Windows drive mirroring, after formatting and re-initialising them. Windows Home Server 2011 built the mirror with no problems. As a test I then pulled the power cable whilst the server was running, just to see what it would do. Again the unscheduled shutdown lost the drive letters and GPT partitions on the data drives. The 250GB operating system drive had no issues.

I think it's something to do with the 3TB drives and using GPT partitions. Hopefully using the smaller 2TB drives, MBR partitions and no mirroring will work better.

Quote:

And that, ladies and gentlemen, is what you get for using Windows and proprietary software RAID.

The fact that the RAID array showed up in the BIOS at all means it has nothing to do with Windows software RAID.

Uhm, you realise that nothing is done in the hardware except print some pretty lines on the screen and tell the Windows driver there's an array, right? It's pure software RAID. Performed by a closed driver with unpredictable behaviour on a single OS, with no guarantees of backward compatibility.

The interesting thing was that cutting the power didn't just break the mirror; the 3TB drives also ended up no longer initialised, losing their storage partitions and drive letters. The 250GB drive was unaffected.

I should really have tried it again with the drives configured as separate disks and no mirroring, to see if the same thing happened then as well. After the drives had already lost their partitions twice I decided it wasn't worth continuing, though. I couldn't guarantee that setup would be reliable enough to store real data on without possibly losing everything on the 3TB drives at some point in future.

Don't mention drive failures. They're never happy events, especially when you have to phone up and explain that, because there are no backups of the broken office PC with all the payroll records on it, it's going to mean sending the failed drive off to a recovery company at a cost of £500. That was earlier this year.

For backing up Windows Home Server 2011, Microsoft have disabled the option of backing up to a remote network share. Whilst I was setting up the Microserver with Windows Home Server 2011 I tried some of the other third party backup options. There's the CloudBerry backup add-in for Windows Home Server 2011. It has a free trial and in theory allows you to back up to a network share. It didn't work very well though. I wasn't impressed at all.

A better option for Windows Home Server 2011 is to use SyncToy 2.1 or robocopy as a scheduled task. SyncToy 2.1 is nice and easy to use, whilst robocopy is similar but run from the command line. You create a scheduled task so that the data is backed up regularly. If it's mirrored ("echo" in SyncToy 2.1) then after the first run the backup utilities do a differential backup, copying only modified files, which makes subsequent backups fast. They also copy the actual files, saving you from needing to extract them from a backup archive as with the built-in Windows backups.
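The mirrored ("echo") behaviour described above can be sketched in Python. This is only a simplified illustration of what robocopy /MIR or SyncToy's echo mode do (copy new or changed files, delete files that no longer exist in the source), not a substitute for either tool; the `mirror` function name is mine, not part of any of these utilities:

```python
import os
import shutil

def mirror(src, dst):
    """Mirror src into dst: copy new/changed files, remove extras.

    A rough sketch of mirror-style backup: after the first full copy,
    only files whose size or modification time differ get copied again,
    which is why subsequent runs are fast.
    """
    os.makedirs(dst, exist_ok=True)
    src_files = set()
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel) if rel != "." else dst
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            rel_path = os.path.normpath(os.path.join(rel, name))
            src_files.add(rel_path)
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # Copy only if missing or changed (the differential part).
            if (not os.path.exists(d)
                    or os.path.getsize(s) != os.path.getsize(d)
                    or os.path.getmtime(s) > os.path.getmtime(d)):
                shutil.copy2(s, d)
    # Delete anything in dst that no longer exists in src (the mirror part).
    for root, _dirs, files in os.walk(dst):
        rel = os.path.relpath(root, dst)
        for name in files:
            rel_path = os.path.normpath(os.path.join(rel, name))
            if rel_path not in src_files:
                os.remove(os.path.join(root, name))
```

Because the copied files land on the destination drive as ordinary files, restoring one is just a straight copy back, with no backup archive to unpack.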

Quote:

Uhm, you realise that nothing is done in the hardware except print some pretty lines on the screen and tell the Windows driver there's an array, right? It's pure software RAID. Performed by a closed driver with unpredictable behaviour on a single OS, with no guarantees of backward compatibility.

Umm, you realise that this still has nothing to do with Windows Software RAID, right? Windows Software RAID is very specifically the RAID functionality built into Windows by Microsoft. It has nothing to do with this third party RAID driver that happens to run on Windows. If the BIOS lost the RAID config then nothing written by Microsoft had any involvement. It is completely AMD's and/or HP's fault.

The BIOS code does more than just print information and then hand off to the driver. It has to have a full software RAID stack or the array would not be bootable.

Also, it is not single-OS. Many software RAIDs and fakeRAIDs are supported by open source drivers, e.g. dmraid in Linux or ataraid in FreeBSD.

Quote:

Uhm, you realise that nothing is done in the hardware except print some pretty lines on the screen and tell the Windows driver there's an array, right? It's pure software RAID. Performed by a closed driver with unpredictable behaviour on a single OS, with no guarantees of backward compatibility.

Umm, you realise that this still has nothing to do with Windows Software RAID, right? Windows Software RAID is very specifically the RAID functionality built into Windows by Microsoft. It has nothing to do with this third party RAID driver that happens to run on Windows.

Where on earth did I specify that I was talking about 'Windows Software RAID'? That's right, nowhere.

Quote:

If the BIOS lost the RAID config then nothing written by Microsoft had any involvement. It is completely AMD's and/or HP's fault.

BIOS never had a config to lose, it just scans the drives. The ones the driver and/or Windows (or a combination of the two badly written pieces of software) managed to leave mangled.

Quote:

The BIOS code does more than just print information and then hand off to the driver. It has to have a full software RAID stack or the array would not be bootable.

Not a full stack to be bootable, and once it hands over to the OS, it's done.

Quote:

Also, it is not single-OS. Many software RAIDs and fakeRAIDs are supported by open source drivers, e.g. dmraid in Linux or ataraid in FreeBSD.

Until you find out it's not conforming to one of the loose, rarely employed standards because they decided to tweak something without telling anybody, or you can't get it recognised because the BIOS did something weird with the layout.

WR304 wrote:

stuff

Wild idea: Don't use Windows. You'll be amazed at how much money and trouble you save yourself.

Quote:

Where on earth did I specify that I was talking about 'Windows Software RAID'? That's right, nowhere.

You haven't just implied, but outright said that this is somehow the fault of Windows. It has nothing to do with Windows.

Quote:

BIOS never had a config to lose, it just scans the drives. The ones the driver and/or Windows (or a combination of the two badly written pieces of software) managed to leave mangled.

Some BIOSes do remember the config, some don't. That's beside the point. The point is that the driver prevents Windows from accessing the metadata area of the disk. The drives are actually presented to the OS slightly smaller than their actual size. Windows cannot mangle what it cannot access. If the driver screws up, that again is not the fault of Windows.

Quote:

Not a full stack to be bootable, and once it hands over to the OS, it's done.

Full enough. They present the arrays as fully read/write BIOS disks. Even DOS can access them sans driver.

Quote:

Wild idea: Don't use Windows. You'll be amazed at how much money and trouble you save yourself.

Now you have just outed yourself as a clueless troll. All OSes have had data loss bugs, both open and closed source.

Quote:

You haven't just implied, but outright said that this is somehow the fault of Windows. It has nothing to do with Windows.

I never said it was the built in RAID functionality of Windows. I also never said I don't think Windows is a broken, unreliable, untrustworthy pile of crap.

Quote:

The point is that the driver prevents Windows from accessing the metadata area of the disk. The drives are actually presented to the OS slightly smaller than their actual size. Windows cannot mangle what it cannot access. If the driver screws up that again is not the fault of Windows.

Fine, valid point. Windows still sucks.

Quote:

Now you have just outed yourself as a clueless troll.

I don't agree with this opinion. You are a troll.

See? Two can play that game.

Quote:

All OSes have had data loss bugs, both open and closed source.

Yes, and I'll give you one guess which OS I've consistently not been able to trust, and which OS has proprietary drivers for everything which regularly have crippling bugs for untold lengths of time with no possible resolution from the user end. Windows is not innocent here.

You disagree with my opinion and suggestions, fine. Be an adult about it, would you?

Quote:

Yes, and I'll give you one guess which OS I've consistently not been able to trust, and which OS has proprietary drivers for everything which regularly have crippling bugs for untold lengths of time with no possible resolution from the user end. Windows is not innocent here.

I fully admit Windows has tonnes of bugs. But we are specifically talking about data loss bugs. Sorry, but Linux "wins" in that department. I'm not going to speculate as to why, but look at all the filesystem failures Linux has had. Ext4, Ext3, ReiserFS and XFS have all had major data loss bugs, some quite recently. Btrfs isn't fully ready but doesn't look to be much better.

Before you call me an MS fanboy, I would pick FreeBSD (and probably OpenBSD) as better than both Windows and Linux from a data loss perspective.

I once pointed out to the admins of a dedicated 100% Linux shop that their most critical infrastructure ran Windows. When they thought I was BSing them I told them to do an OS fingerprint scan on their EMC SAN controllers. Windows handles critical data just fine.

Quote:

You disagree with my opinion and suggestions, fine. Be an adult about it, would you?

I'm not the one yelling about how Windows sucks every chance he gets and blaming it for a problem it not only did not, but could not cause.

Quote:

I fully admit Windows has tonnes of bugs. But we are specifically talking about data loss bugs. Sorry, but Linux "wins" in that department.

I never even suggested Linux, merely 'not Windows'. And Linux, unlike pretty much any other OS, has active development on filesystems without serious financial backing. Every other system has big money behind it. See ZFS. Good filesystem, if a little overcomplex for smaller setups. One of the largest IT companies in the world developed it. Now, how many people were behind ext2/3/4, again?

Quote:

I'm not going to speculate as to why, but look at all the filesystem failures Linux has had. Ext4, Ext3, ReiserFS and XFS have all had major data loss bugs, some quite recently. Btrfs isn't fully ready but doesn't look to be much better.

ext3 and ext4 have had some data loss bugs, yes, mostly due to new development rather than underlying faults, as far as I know. Certainly haven't lost any data to those two myself, although reiserfs bit me the one time I risked it, funny how he proved to be mentally unstable. XFS is not a Linux filesystem, you can thank Silicon Graphics for that. btrfs isn't even close to ready, anyone using it should expect total data loss and be happy when it's only partial.

Quote:

Before you call me an MS fanboy, I would pick FreeBSD (and probably OpenBSD) as better than both Windows and Linux from a data loss perspective.

And if I weren't doing other, specific things with my Microserver, it'd be running FreeBSD (via FreeNAS). It's a nice, solid, reliable OS to work with.

Quote:

I once pointed out to the admins of a dedicated 100% Linux shop that their most critical infrastructure ran Windows. When they thought I was BSing them I told them to do an OS fingerprint scan on their EMC SAN controllers. Windows handles critical data just fine.

It certainly does, when a company with tens of millions of dollars behind it writes a proper driver which bypasses damned near everything Microsoft wrote so it can do the job properly! Unfortunately, lazy development is a disease which infects the platform to the very core, especially on consumer hardware (which the Microserver is, without a doubt).

Quote:

I'm not the one yelling about how Windows sucks every chance he gets and blaming it for a problem it not only did not, but could not cause.

Windows is at the core of the problem. Along with the slow switch to 64-bit, and the continued existence of 16-bit software which should've been abandoned decades ago. Thankfully, that appears to finally be going away.

Bugs or not, Windows or Linux, if you cut the power ANY system can take serious damage. That's why they use those APC units. And if administered correctly, Windows and Linux/Unix are stable enough to be used in a production environment.

If you don't know exactly what to do and what not to do with a given system, please don't blame any incidents on the OS.

Quote:

Bugs or not, Windows or Linux, if you cut the power ANY system can take serious damage. That's why they use those APC units. And if administered correctly, Windows and Linux/Unix are stable enough to be used in a production environment.

If you don't know exactly what to do and what not to do with a given system, please don't blame any incidents on the OS.

The HP N40L Microserver doesn't have a UEFI BIOS. Although its BIOS could see the 3TB drives, that's where I think the problem lies. With 2TB or smaller drives the HP N40L Microserver ought to be fine. Windows Home Server 2011 used with a motherboard that has a UEFI BIOS, for native support of 3TB hard drives, ought to be fine too. If something in the chain of BIOS/driver/OS has a limitation, this problem can happen when using a >2TB drive, even as a secondary storage or external drive it seems.

Have a look at these links about 3TB drives and data loss. It seems fairly common. It's a frightening thought that you can have what is superficially a stable system which will lose all your data without warning. The first one is similar to what happened to the HP N40L Microserver after losing power, although the Microserver lost its drive letters too.
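For what it's worth, the "no longer initialised" symptom corresponds to the primary GPT header, stored in the disk's second sector, being damaged or overwritten. A minimal sketch of what checking for that header involves, assuming 512-byte logical sectors (the function name and the fake sample data here are purely illustrative, not a recovery tool):

```python
import struct

SECTOR = 512  # assumes 512-byte logical sectors

def gpt_header_ok(disk_bytes):
    """Check whether the primary GPT header at LBA 1 looks intact.

    A GPT disk stores the 8-byte signature "EFI PART", a revision
    number and the header size at the start of LBA 1.  If a crash
    wipes this header, tools report the disk as "not initialised".
    """
    hdr = disk_bytes[SECTOR:SECTOR + 92]
    if len(hdr) < 92:
        return False
    signature, revision, header_size = struct.unpack_from("<8sII", hdr, 0)
    return signature == b"EFI PART" and header_size >= 92

# A zeroed disk has no GPT header at all:
blank = bytes(SECTOR * 2)

# A minimal fake header, for illustration only:
fake = bytearray(SECTOR * 2)
fake[SECTOR:SECTOR + 16] = struct.pack("<8sII", b"EFI PART", 0x00010000, 92)
```

A real implementation would also verify the header's CRC32 and fall back to the backup header stored in the disk's last sector, which is how repair tools can often restore a "lost" GPT disk.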

My friend's favourite saying about computer reliability is that it's a percentage game. All you can do is try to minimise the possibility of something going wrong.

This HP Microserver was intended as a replacement for an older NAS box. That NAS box was a Qnap TS-209 connected to an APC SmartUPS 750 UPS for battery backup in case of any power outages. Despite being connected to a UPS the external power brick of the Qnap TS-209 burnt out, turning off the device whilst it was running.

It's also not unknown for the UPS itself to fail. Earlier this year a company I do some work for had the battery of the UPS connected to their server die. The battery began leaking internally and the UPS stopped working altogether. The server was a Dell PowerEdge 1800 running Windows Server 2003 with hardware RAID 1 for its data drives. Despite having lost power, the server booted straight back up again without any drama.

Quote:

Bugs or not, Windows or Linux, if you cut the power ANY system can take serious damage.

There's no denying that, but what he's been experiencing is not normal or in any way excusable. It's also not something you can fix while remaining on a closed system.

WR304 wrote:

The HP N40L Microserver doesn't have a UEFI BIOS. Although its BIOS could see the 3TB drives, that's where I think the problem lies. With 2TB or smaller drives the HP N40L Microserver ought to be fine. Windows Home Server 2011 used with a motherboard that has a UEFI BIOS, for native support of 3TB hard drives, ought to be fine too. If something in the chain of BIOS/driver/OS has a limitation, this problem can happen when using a >2TB drive, even as a secondary storage or external drive it seems. Have a look at these links about 3TB drives and data loss. It seems fairly common.

It's nothing to do with the BIOS (well, okay, the lack of AHCI mode is a problem. I fixed that on mine). It's pure OS/driver issues, and with a modern controller (AHCI based, essentially, unless you want to go proprietary) on a non-Windows OS, the problems just do not exist. With Windows, unless you're paying the big money to a company who will lock you in to their solution, you get to play mix and match until you get some configuration which works. Until you sneeze.

My understanding is that this problem wouldn't exist if the motherboard had a UEFI BIOS. As the UEFI BIOS natively supports drives larger than 2TB shouldn't the large drives work properly on a UEFI motherboard, when used with a current Windows operating system, without any drama or workarounds needed?

"Support for large disksBIOS systems support disks that use the master boot record (MBR) partitioning scheme. This scheme is limited to a maximum disk size of roughly 2.2 terabytes and a maximum of 4 primary partitions.UEFI supports a more flexible partitioning scheme called GUID Partition Table (GPT). GPT disks use 64-bit values to describe partitions. This scheme allows a maximum disk size of roughly 16.8 million terabytes and 128 primary partitions.

CPU-independent architectureAlthough BIOS can run 32-bit and 64-bit operating systems, during early stages of boot, it relies on a 16-bit interface called "real mode". This interface is based on the original Intel x86 processor architecture. All firmware device drivers (such as RAID controllers) on BIOS systems must also be 16-bit. This requirement limits the addressable memory to 64 kilobytes (KB) in the early stages of boot and consequently constrains performance.UEFI isn't specific to any processor architecture. It can support modern 32-bit and 64-bit firmware device drivers. The 64-bit capability enables the system to address more than 17.2 billion gigabytes (GB) of memory from the earliest stages of boot.

Flexible pre-OS environmentUEFI drivers and applications run in the boot environment with very few constraints. For example, UEFI can provide a full network protocol stack in addition to high-resolution graphics and access to all devices, even if no functional operating system is available.Because UEFI supports a flexible pre-OS programming environment, UEFI applications can perform a wide variety of tasks for any type of PC hardware. For example, UEFI applications can perform diagnostics and firmware upgrades, repair the operating system and notify technicians, or contact a remote server for authentication." UEFI and Windows Microsoft white paper Pages 6 + 7

"Support for large disksBIOS systems support disks that use the master boot record (MBR) partitioning scheme. This scheme is limited to a maximum disk size of roughly 2.2 terabytes and a maximum of 4 primary partitions.UEFI supports a more flexible partitioning scheme called GUID Partition Table (GPT). GPT disks use 64-bit values to describe partitions. This scheme allows a maximum disk size of roughly 16.8 million terabytes and 128 primary partitions.

This is simply recognising and booting from GPT. You can have any partition table you like, or none at all, if the BIOS, UEFI, or anything else is not to boot off of it, it is none of its business. All issues with drives over 2TB which are not used as boot devices are down to the OS and drivers, and occasionally hardware (some controllers do not comprehend this business of large numbers. They are designed by shortsighted fools, but that's nothing new in the computing world).
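As an aside, the "roughly 2.2 terabytes" figure quoted above falls straight out of the arithmetic: classic MBR partition entries record a partition's start and length as 32-bit sector counts, and drives of this era use 512-byte sectors. A quick check:

```python
# MBR partition entries hold 32-bit LBA (sector) values, so with
# 512-byte sectors the largest addressable size is:
SECTOR_BYTES = 512
MAX_SECTORS = 2 ** 32  # 32-bit sector count

mbr_limit_bytes = MAX_SECTORS * SECTOR_BYTES
print(mbr_limit_bytes)  # 2199023255552 bytes, i.e. ~2.2 decimal terabytes

# GPT records partition extents as 64-bit LBA values instead, which
# is why the quoted GPT limit is so many orders of magnitude larger.
```

This also explains why a 3TB drive is the first common capacity to hit the wall: it is the first size past 2,199,023,255,552 bytes.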

Quote:

CPU-independent architecture

Although BIOS can run 32-bit and 64-bit operating systems, during early stages of boot, it relies on a 16-bit interface called "real mode". This interface is based on the original Intel x86 processor architecture. All firmware device drivers (such as RAID controllers) on BIOS systems must also be 16-bit. This requirement limits the addressable memory to 64 kilobytes (KB) in the early stages of boot and consequently constrains performance.

UEFI isn't specific to any processor architecture. It can support modern 32-bit and 64-bit firmware device drivers. The 64-bit capability enables the system to address more than 17.2 billion gigabytes (GB) of memory from the earliest stages of boot.

Useful, but irrelevant to HDDs.

Quote:

Flexible pre-OS environment

UEFI drivers and applications run in the boot environment with very few constraints. For example, UEFI can provide a full network protocol stack in addition to high-resolution graphics and access to all devices, even if no functional operating system is available.

Because UEFI supports a flexible pre-OS programming environment, UEFI applications can perform a wide variety of tasks for any type of PC hardware. For example, UEFI applications can perform diagnostics and firmware upgrades, repair the operating system and notify technicians, or contact a remote server for authentication." (UEFI and Windows, Microsoft white paper, pages 6 and 7)

Again irrelevant to HDDs except for the easier (and hopefully less buggy, but that'll be the day pigs fly) implementation of fakeRAID to sell to consumers who don't know what they're doing. Oh, and maybe useful SMART tools can finally be present.

Could the same loss of GPT partitions on a >2TB hard drive when used with Windows potentially happen if the motherboard had a UEFI BIOS also then? I'd hope it wouldn't but it would be useful to have a definitive answer either way.

It's better to be clear on this sort of issue before you lose all your important files, rather than realise with hindsight that there was a problem.

Quote:

Could the same loss of GPT partitions on a >2TB hard drive when used with Windows potentially happen if the motherboard had a UEFI BIOS also then? I'd hope it wouldn't but it would be useful to have a definitive answer either way.

I don't see why not. The BIOS didn't do anything, so why would using UEFI in an otherwise identical setup make any difference?

Set the box up with FreeNAS, use GPT partitions. Pull the plug (please ensure you've synced your disks first). See what happens.

Quote:

I'm not the one yelling about how Windows sucks every chance he gets and blaming it for a problem it not only did not, but could not cause.

Windows is at the core of the problem. Along with the slow switch to 64-bit, and the continued existence of 16-bit software which should've been abandoned decades ago. Thankfully, that appears to finally be going away.

I'll agree with you partially that Microsoft was fairly slow to roll out 64-bit to consumers, but that was also due to internal development delays with Vista which delayed its release (Longhorn/Vista was slated to launch in 2003, not 2006).

One of Microsoft's strong points as a software developer has been its legacy support for customers. Every major Windows release has a minimum support period of 7 years (XP was extended to 14 years for critical updates). Many businesses still run accounting or management software that's 16-bit. Even FreeBSD doesn't guarantee support beyond 2 years for major releases.

Monkeh16 wrote:

It's nothing to do with the BIOS (well, okay, the lack of AHCI mode is a problem. I fixed that on mine). It's pure OS/driver issues, and with a modern controller (AHCI based, essentially, unless you want to go proprietary) on a non-Windows OS, the problems just do not exist. With Windows, unless you're paying the big money to a company who will lock you in to their solution, you get to play mix and match until you get some configuration which works. Until you sneeze.

And then you get into the realm of whether or not your network card has proper support under BSD/*nix, when Windows drivers are freely available. And then you sneeze. If you're complaining about slow adoption of technologies, how long was it before wireless support was properly added in Linux distros? Power management has been a fairly large afterthought for BSD and Linux in comparison with Windows. My first Centrino laptop was a horrible mess when I first tried to install Ubuntu. All operating systems have had their fair share of issues when it comes to hardware.

It's funny that you mention "saving money" by not using Windows - how about time? Making the switch from Windows to Linux is not as simple as it seems, and in the end, you'll probably end up spending more time-value than the license cost. I don't want to turn this into an OS debate, but simply suggesting "don't use Windows" isn't always a viable or practical option.

The golden rule to realize in all of this is that RAID, in whatever form you choose to implement it (hardware, mdadm, windows, etc) is not a proper form of backup. It is intended to minimize downtime due to hardware failures.

Quote:

One of Microsoft's strong points as a software developer has been its legacy support for customers. Every major Windows release has a minimum support period of 7 years (XP was extended to 14 years for critical updates). Many businesses still run accounting or management software that's 16-bit. Even FreeBSD doesn't guarantee support beyond 2 years for major releases.

It's not a strength when it encourages people not to migrate away from software they should've stopped using many years ago. Putting it off makes things worse.

Quote:

And then you get into the realm of whether or not your network card has proper support under BSD/*nix, when Windows drivers are freely available.

I'm sorry you're still living in 1998; would you like to catch up to the modern world, where Unix networking is light years ahead of Windows? Wired drivers are very rarely a problem, IPv6 support is light years ahead (not that anybody uses it yet, more encouraged laziness making things worse), and the entire TCP/IP stack of any modern Unix-like OS (Slowaris, the BSDs, Linux, etc.) is simply faster than Windows.

And as for 'freely available', I had to go to a third party site and manually locate a functional driver version for a wireless card for Windows just a couple of months ago. The manufacturer of the card does not supply a driver. The manufacturer of the laptop does not supply a working driver. Linux works perfectly right off the bat with it. And then you get the joy of having to get, say, Realtek drivers. Enjoy your 15kB/s.

Quote:

If you're complaining about slow adoption of technologies, how long was it before wireless support was properly added in Linux distros?

Yeah, it took a while, and now it's more reliable and doesn't require third party tools to have full support for all the hardware functionality.

Quote:

Power management has been a fairly large afterthought for BSD and Linux in comparison with Windows.

Desktop usage has been a fairly large afterthought. Meanwhile, you use Linux every day more than Windows and don't even realise it.

Quote:

All operating systems have had their fair share of issues when it comes to hardware.

Yes, but this isn't a hardware issue, this is yet another shortsightedness issue.

Quote:

Making the switch from Windows to Linux is not as simple as it seems, and in the end, you'll probably end up spending more time-value than the license cost.

I suppose that comes down to how capable you are of handling a non-point-and-click system and what your requirements are. You won't know until you try, blindly sticking with one solution because 'it works, mostly' gets you nowhere at all.

Quote:

I don't want to turn this into an OS debate, but simply suggesting "don't use Windows" isn't always a viable or practical option.

And still it's a valid suggestion and one you won't ever know the result of without trying it.

This requires a very small screwdriver to unlock the conductor pins from the connector and swap them around to the desired slots.

I have my replacement Scythe fan and jewellers' screwdrivers, and I understand the changed pinout. What I don't understand is how to use a screwdriver to unlock the pins - what needs manipulating to do this little job, please?

Look at the fan connector. On one side you will see 4 little metal tabs, one for each pin. Use something small to push one tab down, then pull the cable corresponding to that pin and out it comes. Do the same for the second one you need to swap. Now push the cables, with their little metal pins, back into the connector in their new positions. Done. Kinda like http://www.youtube.com/watch?v=iUUxt6GBV6A , but one by one.

@Monkeh16 - First off - I'm going to preface this whole post by saying that I have tried to use Linux on the desktop several times in the past 10 years, and every time I've been put off by one thing or another that was simply missing or did not work as well as Windows. Your attitude towards my post suggests that you think I'm deeply attached to my Windows environment, but instead, like you pointed out, it's mostly that desktop usage for *nix has been a large afterthought.

Monkeh16 wrote:

It's not a strength when it encourages people not to migrate away from software they should've stopped using many years ago. Putting it off makes things worse.

What happened to Apple when they dropped OS9 and then PPC support? They angered a lot of clients. Angering clients is not good for business.

Monkeh16 wrote:

Realtek drivers. Enjoy your 15kB/s.

Realtek speaks for itself.

Monkeh16 wrote:

Yeah, it took a while, and now it's more reliable and doesn't require third party tools to have full support for all the hardware functionality.

I can't say anything for the last few years, but trying to get Ubuntu wireless working nicely on my EEE 1005 was more trouble than it should have been.

Monkeh16 wrote:

Desktop usage has been a fairly large afterthought. Meanwhile, you use Linux every day more than Windows and don't even realise it.

I do realize it. I run my own Debian Squeeze VPS for my websites, and I have a pfsense box running as my router. I'm well aware that most of the websites (including this one) are run on *nix. These are still not desktop scenarios. IMO, the desktop environment for Linux still has a long way to go.

Monkeh16 wrote:

Yes, but this isn't a hardware issue, this is yet another shortsightedness issue.

Monkeh16 wrote:

I suppose that comes down to how capable you are of handling a non-point-and-click system and what your requirements are. You won't know until you try, blindly sticking with one solution because 'it works, mostly' gets you nowhere at all.

And still it's a valid suggestion and one you won't ever know the result of without trying it.

You're suggesting that people move from one solution that "works, mostly", to another one that "works, mostly". Why would people do this? Personally, I could work all day in a bash console to do administrative tasks. I have dealt with a Fedora system as my primary desktop while I was doing my studies at UBC. I have worked with Ubuntu on my laptops. I have administered CentOS and Debian servers. For a day-to-day desktop environment, I've made an educated decision to use Windows. This may seem like an odd choice to die-hard Linux fans (I always get flak for this), but I've spent far more time trying to get Linux to work nicely than the equivalent working cost of Windows licenses for all of my systems. Where's the value in that? I'm not saying that Linux doesn't have its merits, but blindly suggesting to people that they should switch is not always a good suggestion.

Linux on servers? Great. Linux on the desktop? Not quite ready for prime-time. Soon (hopefully).

Quote:

@Monkeh16 - First off - I'm going to preface this whole post by saying that I have tried to use Linux on the desktop several times in the past 10 years, and every time I've been put off by one thing or another that was simply missing or did not work as well as Windows. Your attitude towards my post suggests that you think I'm deeply attached to my Windows environment, but instead, like you pointed out, it's mostly that desktop usage for *nix has been a large afterthought.

I was not addressing Linux as a desktop OS solution, but as a platform for a primarily headless fileserver. The primary usage of the hardware being discussed in this thread. Personally, I find I'm put off of Windows by one thing and five hundred others which are simply missing or don't work as well as a Unix OS.

Quote:

Monkeh16 wrote:

It's not a strength when it encourages people not to migrate away from software they should've stopped using many years ago. Putting it off makes things worse.

What happened to Apple when they dropped OS9 and then PPC support? They angered a lot of clients. Angering clients is not good for business.

And what's happened to Apple since they abandoned their legacy hardware and software? They've grown orders of magnitude, and managed to retain their primary customer base. People simply moved on from their legacy software.

Quote:

Monkeh16 wrote:

Yeah, it took a while, and now it's more reliable and doesn't require third party tools to have full support for all the hardware functionality.

I can't say anything for the last few years, but trying to get Ubuntu wireless working nicely on my EEE 1005 was more trouble than it should have been.

Then please do catch up.

Quote:

Monkeh16 wrote:

I suppose that comes down to how capable you are of handling a non-point-and-click system and what your requirements are. You won't know until you try, blindly sticking with one solution because 'it works, mostly' gets you nowhere at all.

And still it's a valid suggestion and one you won't ever know the result of without trying it.

You're suggesting that people move from one solution that "works, mostly", to another one that "works, mostly". Why would people do this?

Because, again, I am not talking about daily desktop activities. This entire thread is about a fileserver. In such situations, Linux or the various BSD flavours work. Not mostly, all the way. They give you everything you could possibly want, on a more stable platform than Microsoft have ever created or ever will.

Quote:

For a day-to-day desktop environment, I've made an educated decision to use Windows. This may seem like a misnomer to die-hard Linux fans (I always get flak for this), but I've spent far more time trying to get Linux to work nicely than the equivalent working cost for Windows licenses for all of my systems. Where's the value in that? I'm not saying that Linux doesn't have its merits, but blindly suggesting to people that they should switch is not always a good suggestion.

At the end of the day, what works for you is what's best. I am merely trying to advise certain people in this thread to actually try an alternative platform for a non-desktop system.

Quote:

Linux on servers? Great. Linux on the desktop? Not quite ready for prime-time. Soon (hopefully).

Depends on your definition of prime-time. But again, we are talking servers here anyway.
