My workaround: switch to another browser. Then if login times out there, I have to switch to yet a third browser... or a different computer. Even clearing cookies AND rebooting the RAC doesn't fix it.

[*]Card is visible, and can be logged into, but any attempt to access the KVM page takes me directly to the login page.

[*]Even when working, the KVM viewer page serves files with textual vomit appended to the extension:

viewer.jnlp(192.168.1.11@0@1319314256982)
viewer.jnlp(192.168.1.11@0@1324063258059)
viewer.jnlp(192.168.1.11@0@1324063290491)
viewer.jnlp(192.168.1.11@0@1324063329510)
viewer.jnlp(192.168.1.11@0@1324063545618)
viewer.jnlp(192.168.1.11@0@1324063547061)
viewer.jnlp(192.168.1.11@0@1324063547215)
viewer.jnlp(192.168.1.11@0@1324063547355)

Windows doesn't know how to open a .jnlp(192.168.1.11@0@1324063547355) file... nor does Gnome.

Who do we talk to to get this crap fixed? The Proliant chat people say they're just for hardware-replacement incidents. Too bad the microserver entirely lacks a Super IO chip, so there's no way to get a real serial port. (PCIe serial port doesn't work in Grub, and doesn't do console redirection.)

Oh yeah, and my fan is noisy, in such a way that it'd be less annoying if it spun FASTER. It's all bearing noise, and it revs up and down continuously, just enough to be annoying.

I have been waiting silently for almost a year now in hopes these issues would be resolved. As of now I have 3 inaccessible cards which have dropped off the network and are not responding to anything via ipmitool. A few other machines with these cards are working OK. Right now there is one key difference between the machines where it is working fine and those where it is not:

01:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection

I do seem to recall it happening on my microservers with no secondary card installed; however, as it's so random and hard to reproduce, it's very hard to know for sure.

FYI, on cards that have dropped off the network, the self-test via ipmitool initially worked fine, returning 55 00, i.e. 'BMC all self test pass'. Upon running 'ipmitool sensor', the card died and I can no longer talk to it. However, ipmitool sensor works fine for those cards that are still on the network.
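For anyone trying to reproduce this, the diagnostic sequence described above looks roughly like the following (a sketch; output and behavior will obviously vary by card state):

```shell
# Query the BMC self-test result; 55 00 means all tests passed
ipmitool mc selftest

# Read the sensor repository -- this is the step that killed
# the already-flaky cards described above
ipmitool sensor
```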

Here's what I ended up doing: I bought a USB-to-TTL adapter, wrapped it in electrical tape to avoid shorting, and installed it inside the microserver. There's an unmarked TTL-level (3.3v) serial port (115200n8) on a 4-pin header near the ethernet port (on the management card). Looking at the pins, it was something like power (3.3v), tx, rx, gnd -- only the ground was obvious, and I'm not sure on the order of tx and rx.

So, I now have an internally-installed serial console to the remote access card itself. If my remote access card dies while the server is still alive, I can fire up "screen" on the server, log into the card itself, and reboot it. The root password is "root", by the way.
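For anyone wanting to replicate the setup, attaching to the console is a one-liner once the adapter enumerates (a sketch; /dev/ttyUSB0 stands in for whatever device node your USB-to-TTL adapter actually gets):

```shell
# Attach to the management card's TTL serial console at 115200 8N1
screen /dev/ttyUSB0 115200
```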

EDIT: Also, where do we holler to get the **** firmware fixed? As it is now, the management card is unfit for its stated purpose.

I've been running my N40L happily for a fortnight or so now, but decided to install a 6450HD gfx card to help with a few tasks. Now this is my first home server, admittedly, and certainly it's taking a fair bit of getting used to WHS 2011 and the idea of a headless PC, but I have installed tens of GFX cards in the past with no problems. Not this time.

The server works fine, and at the moment I have it directly connected to a small monitor, because I'm doing a lot of fiddling about and it saves time.

However, when I installed the GFX card it appeared to boot up fine, and I dawdled upstairs to my office. I was able to remote connect in fine, and the GFX card showed as working.

However, if I try to use the server's monitor, I'm unable to get it working at all. The Acer screen connected directly to the DVI port just doesn't detect a signal. Yet if I remote in, it shows all drivers working fine. I'm unable to change the screen or resolution via remote, though, as that's just how WHS 2011 seems to like it.

Any suggestions? The only things I'm able to think of (which are clutching at straws) are:

- Since the motherboard has a built-in VGA socket, is that conflicting in some way? I checked in the BIOS and there were no obvious settings for turning off the onboard GFX.

- I'm having to convert from a DVI output on the card to a VGA input on the screen. I've tried many different cables and connectors. Other PCs work fine this way.

If you have any suggestions I'm all ears! BTW I used the Windows 7 drivers, as that's what most forums seemed to suggest. I notice that I can't uninstall the drivers from the Programs CP in the way that you can with some graphics drivers.

- Since the motherboard has a built-in VGA socket, is that conflicting in some way? I checked in the BIOS and there were no obvious settings for turning off the onboard GFX.

I am pretty sure there is a BIOS setting for switching the on-board graphics between "auto-detect" and "disabled". I don't want to reboot my server right now, but I'd suggest you check again and see if you can explicitly disable the on-board graphics. (From what I recall, the setting I'm thinking of should be on the same page where you set the video memory limit for the on-board GPU.)

I'm looking at speccing up a small machine to host our QuickBooks and Fishbowl Inventory servers, which are currently running on a hosted cloud server over an internet link of dubious reliability and unreasonable expense.

Since the only task I'll be throwing at it, for now, is a 50 MB DB, I'd like to put an SSD in there for the OS, apps and DB. The apps and DB only take up about 2 GB, so the OS is the only question. What's a safe number for Server '08 Foundation? Will I be OK with a 128 GB SSD?

Is there a form factor issue/mounting kit I need to consider? I also see mention in the thread start that the SATA cable is smaller, but do I care? Is all of this pre-wired into the bay anyway?

Onto OSes. Nowhere on HP's site can I find out what comes on this thing, if anything. There are brief mentions of SBS Essentials, but that's literally the extent of it. Is that pre-installed? Is ANYTHING pre-installed? There's also a reference by a user to some sort of USB stick for the OS; is that just the install image? Is that standard?

Finally, what does the Remote Management gizmo do, exactly? I can't find any explanation of it even after some reasonably adept Google-Fu.

By default, no OS comes with it at all - no install media, nothing preinstalled or anything. That wasn't a concern when I ordered a couple of these, since we were site-licensed for Windows servers & clients, but you'll have to budget for that. 128GB would be plenty even for a full install of '08 Standard (provided you don't need dedicated space for file server duties).

I had two data disks in one of my microservers, in a mirror (RAID 1). I wanted to move the disks to a different microserver that was built with a newer version of Fedora. The newly built box has two OS disks mirrored, like the server the data disks were coming out of. So, when I put the data disks into the newly built server and went into the RAID controller menu, the mirror was automatically created and the data was still intact.

I certainly was not expecting this. In fact, I had an offline backup that I was expecting to have to restore from. All I had to do was activate the volume group and logical volume in the OS and mount the disk. Nice surprise that it was that easy.
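The OS-side steps amount to a couple of LVM commands (a sketch with hypothetical names; "datavg" and "datalv" stand in for whatever the volume group and logical volume are actually called on the moved disks):

```shell
# Rescan for volume groups on the newly attached disks
vgscan

# Activate the volume group so its logical volumes appear
vgchange -ay datavg

# Mount the logical volume
mount /dev/datavg/datalv /mnt/data
```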

I'm seriously considering getting one of these to finally retire my old P4 box running FreeBSD and ZFS...

So, anybody have FreeBSD running on this beast? Are there any issues with driver support, or should I be able to slap FreeBSD 9 on this sucker and, basically, spend the rest of that evening with a big old smile upon my face?

I'm planning a build of one of these over the next few weeks with FreeNAS (likely) and some flavor of ZFS. Trouble is, I can't find one for sale at the moment...the Egg is sold out and Amazon has them through 3rd party sellers only @ $400.

Doh!

How many disks can I cram into this thing? I'd like to run more than 4 if possible, and I don't need an optical drive...

Somewhat OT, but my requirements are as follows:

1) Crashplan compatibility.
2) Reasonably easy setup and a decent GUI (I can RTFM but I have zero practical *nix experience).
3) Build longevity. I want this build to last at least 5 years (storage space excepted) without any needed HW upgrades, and I won't be touching the SW either if it just works.

How many disks can I cram into this thing? I'd like to run more than 4 if possible, and I don't need an optical drive...

I've set up a couple of them with 5 disks, with the extra one in the optical bay space. You'll need to get a drive mounting bracket & extra data/power cable. I was worried about vibration/noise from that drive, but it turned out there wasn't any added noise from it that was discernible to me.

As I said before, 9 drives internally is possible. 4x3.5" in the normal bays, 4x2.5" in ODD bay off a controller card, one drive taped to the top of 4x2.5" box. That's before you hit the eSATA connection on the back or USB.

So, anybody have FreeBSD running on this beast? Are there any issues with driver support, or should I be able to slap FreeBSD 9 on this sucker and, basically, spend the rest of that evening with a big old smile upon my face?

The one thing that's not working well for me is the IPMI / remote access card: there are some incorrect entries in the MicroServer's DMI table that prevent the FreeBSD IPMI driver from finding it. (On top of that, the card sporadically becomes inaccessible via the network; it's definitely nice when it works, but given the issues I'm not sure I'd shell out the money again.)

Can't you specify an alternative port for the IPMI driver in FreeBSD as you can with Linux?

ports=0xca2 solves the problem.
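For context, here is roughly where that override lives on each OS (a sketch; the Linux line uses the ipmi_si module parameters, and the FreeBSD hint names follow the usual device.hints convention):

```shell
# Linux (OpenIPMI): load ipmi_si with an explicit KCS I/O port
modprobe ipmi_si type=kcs ports=0xca2

# FreeBSD: the equivalent hints would go in /boot/device.hints
#   hint.ipmi.0.at="isa"
#   hint.ipmi.0.port="0xca2"
```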

I too have the problem with the NIC on the remote access card dying from time to time. The board itself still works, as it responds to IPMI commands. Seems like a memory leak/bug in the firmware, since the fault can be reproduced by leaving a virtual KVM session open overnight. Spoke to HP support at some length about it - got the usual muppets who didn't really understand the nature of the problem, whose only solution was to send out an engineer to replace the card - too much hassle for me at this time, so I closed the case.

Can't you specify an alternative port for the IPMI driver in FreeBSD as you can with Linux?

ports=0xca2 solves the problem.

I too have the problem with the NIC on the remote access card dying from time to time. The board itself still works, as it responds to IPMI commands. Seems like a memory leak/bug in the firmware, since the fault can be reproduced by leaving a virtual KVM session open overnight. Spoke to HP support at some length about it - got the usual muppets who didn't really understand the nature of the problem, whose only solution was to send out an engineer to replace the card - too much hassle for me at this time, so I closed the case.

Is this with the OpenIPMI driver on Linux? Unfortunately, it seems the FreeBSD driver ignores device hints if it finds a valid entry in the DMI table.

I can use the FreeIPMI tools to connect to the card even when it doesn't respond to network access, but even issuing a cold BMC reset doesn't seem to bring the network back up (nor does anything else I've tried other than physically unplugging the server). For my purposes, this is not too much of a problem, but for anybody who actually has to rely on remote access, I would imagine this is kind of a deal-breaker... Which makes me wonder how they can get away with just not bothering to fix this.
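For reference, the in-band cold reset attempt described above looks like this (a sketch; on these cards neither invocation brings the NIC back):

```shell
# FreeIPMI: cold-reset the BMC over the in-band interface
bmc-device --cold-reset

# ipmitool equivalent
ipmitool mc reset cold
```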

It's the chipset I'm talking about as the main advantage. ATI's chipset is, well, to be frank, a bit rubbish, at least on the N36L Microserver I have. It's not much better on the N40L Micro I have at work. The SATA is pants, and the USB is pretty much useless for anything other than keyboards/mice. Neither can sustain writes at 80MB/s across gigabit to the SATA RAID - something my similarly priced Celeron G540 mITX server at home can do without breaking a sweat.

The CPU is just one part of the whole package - yes, the Turion's CPU and GPU are probably much better than Atom, but the chipset lets the APU down a lot in my experience.

It's the chipset I'm talking about as the main advantage. ATI's chipset is, well, to be frank, a bit rubbish, at least on the N36L Microserver I have. It's not much better on the N40L Micro I have at work. The SATA is pants, and the USB is pretty much useless for anything other than keyboards/mice. Neither can sustain writes at 80MB/s across gigabit to the SATA RAID - something my similarly priced Celeron G540 mITX server at home can do without breaking a sweat.

Running the same software in all cases? If so, which? What's the CPU load? That Celeron is substantially faster than the Atom and the Turion, so depending on what software you're using, this may well explain the difference.

In any case, the SATA controller itself seems just fine here on an N36L; I can easily saturate all four drives simultaneously (locally, of course), so it's clearly not an issue with the chipset's internal bandwidth. (Then again, I'm not actually using the RAID feature, so perhaps it's that. Why anyone would use firmware RAID over software RAID in the first place, though, is beyond me.)
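To illustrate the software RAID alternative (a sketch with hypothetical device names; on Linux this would be mdadm, while FreeBSD users would reach for ZFS or gmirror instead):

```shell
# Create a two-disk software RAID-1 mirror (hypothetical devices)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on it and mount
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array
```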

Another aspect is the NIC (which is not part of the AMD chipset), though even the el cheapo Realtek NICs can saturate Gigabit nowadays, so I'd be surprised if the Broadcom controller in the Microserver had trouble with that.

Quote:

The CPU is just one part of the whole package - yes, the Turion's CPU and GPU are probably much better than Atom, but the chipset lets the APU down a lot in my experience.


The Turion is not an APU; the GPU is actually in the chipset. I'm not at all convinced that what you're seeing is actually a chipset issue, though.

I am using a 64-bit OS, and when I try to use the virtual media functionality, I get "the virtual media native library cannot be loaded". From the one thread I could find relating to this, it's because that does not work with 64-bit IcedTea. I would have to uninstall java/jdk/icedtea and install the 32-bit version in order to have it work. What the crap is this?

It's the chipset I'm talking about as the main advantage. ATI's chipset is, well, to be frank, a bit rubbish, at least on the N36L Microserver I have. It's not much better on the N40L Micro I have at work. The SATA is pants, and the USB is pretty much useless for anything other than keyboards/mice. Neither can sustain writes at 80MB/s across gigabit to the SATA RAID - something my similarly priced Celeron G540 mITX server at home can do without breaking a sweat.

I suspect you were running into something other than a problem with the SATA chipset. I can just about saturate a gigabit link (~100MB/s) with writes on my N36L using simpler protocols (NFSv3 and iSCSI). It only struggles if I use something a bit heavier like NFSv4 (~85MB/s) or CIFS (~55MB/s), and that's entirely down to the CPU.

However, I'm singularly unimpressed with the newer MicroServer. Unless it's going to attract the same rebates as the previous generation, I'd much rather pay the minor premium and go with something based on the S1200KP and a Gxxx/i3 CPU.

These seem to have done well for HP as relatively inexpensive, efficient, compact servers that can pack a healthy amount of storage and RAM into a small package. If they were more expensive, the most likely result is that they wouldn't sell as well to the people who have been buying them thus far.

The MX130 is a nice machine, but where the UK got huge discounts on the HP and the US didn't so much, the opposite is true of the Fujitsu. They start out at £300+ for the basic dual core with no drives, which is over three times more than what I paid for my Microserver after rebate.

In case anyone has missed it - version 1.3 of the firmware for the remote access (BMC/IPMI) cards has finally been released. It supposedly fixes the issue where the NIC becomes non-responsive. When I reported this problem a long while back, I told the HP people it seemed very much like a firmware bug, yet they insisted it was a hardware fault.