what did you learn today?

I learned today, from DreamWorks IT, that earthquake trays for racks also help reduce vibration harmonics within the data center, leading to increased hard drive life. We are now thinking about putting them into our data center where our SAN arrays are.

I think the subset of West Coast IT operations that actually tracks hard drive life with and without earthquake trays might be pretty small.

Environmental effects on hard drives can be pretty interesting. I learned from a former customer that, above a certain altitude, hard drive lifespans are nasty, brutish, and short. And there's always that YouTube video that shows the performance hit when a guy shouts at his drives.

Today I learned you cannot upgrade an evaluation version of Windows Server 2012 to a retail production version of the same OS by simply adding the license key and attempting to activate *if the server is a domain controller*.

This is the sort of thing that happens when customers want infrastructure built out before they have keys in hand. "Install it in trial mode and then we'll add the keys as soon as the order comes through."
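
For the record, the supported conversion path is DISM's edition servicing rather than just punching in the key and activating; roughly this, from an elevated prompt (the product key below is a placeholder):

```
# Check what edition the install reports (eval installs show up as e.g. ServerStandardEval)
DISM /online /Get-CurrentEdition

# List the editions this install can be converted to
DISM /online /Get-TargetEditions

# Convert eval to retail with your key (placeholder shown); this is the step
# that refuses to run while the box is still a domain controller
DISM /online /Set-Edition:ServerStandard /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula
```

As far as I can tell, once it's a DC your options are to demote it, convert, and re-promote, or rebuild.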

Mauna Kea. Apparently hard drives use aerodynamic effects to prevent the head from skipping merrily along the spinning rust. At least they used to; this was a decade ago.

It's fundamental to modern spinning drives. They can't be hermetically sealed, either, so the interface to atmosphere is heavily filtered. Solid-state disks obviously solve these problems, and are now much more practical for general use.

Makes me wonder if the latency from locating your storage at the base of the mountain would be a problem, or even noticeable in the face of caching.

That although with Exchange 2007 SP2 or later, the Exchange self-signed certificate used for inter-server communications on the Hub Transport role server is supposed to have a five-year validity period, ours only had one year.

Yup, so when the UCC cert was replaced, secure communication with the edge server broke, meaning email delivery broke.

Recreating the Edge subscription itself went fine, except that it didn't fix the authentication problem.
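
For anyone hitting the same wall, the moving pieces look roughly like this in the Exchange Management Shell (file path and Active Directory site name are placeholders; this is a sketch of the steps, not our exact transcript):

```
# See which certs Exchange knows about and when they expire
# (ours showed NotAfter one year out instead of the expected five)
Get-ExchangeCertificate | Format-List Thumbprint, Services, Subject, NotAfter

# Regenerate the default self-signed cert on the Hub Transport server if it's the culprit
New-ExchangeCertificate

# Recreate the edge subscription: export on the Edge server...
New-EdgeSubscription -FileName "C:\EdgeSub.xml"

# ...then import on the Hub side (site name is deployment-specific) and resync
New-EdgeSubscription -FileData ([byte[]]$(Get-Content -Path "C:\EdgeSub.xml" -Encoding Byte -ReadCount 0)) -Site "Default-First-Site-Name"
Start-EdgeSynchronization
Test-EdgeSynchronization
```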

Wouldn't every IT operation on the west coast already know about the earthquake trays?

Quite possibly, but I do not live on the west coast. I live and work in MN where we don't worry about earthquakes. It was just that I happened to be touring the DreamWorks data center and asked them about the trays.

You still need disks of some kind at the telescopes. At the telescopes I worked at, data was transferred down to sea level on an hourly basis, but the computers at the summit still needed big disks in case the network went down. And astronomers (and various automated systems) needed a lot of disk for data processing and whatnot.

One of our instruments' data acquisition and reduction machines (the instrument had eight of them) went through disks like nobody's business. The disks would either fail in the first week or last forever. When we brought the bad disks down to sea level they would work just fine, but since they were technically "failed disks", we wouldn't put them in any servers. I took one of these "failed disks" home and used it in my computer for five years.

One time we had a cleaning service come in and do some cleaning in our server room at the summit. To clean the carpeted tiles, they put down some powder and then vacuumed it up. That led to a huge loss of disks and tape drives. I don't think we used that cleaning service again.

Wouldn't a pressurized room* help in that regard?

*Or just an equipment cabinet if you didn't want to do the full monty.

The annual operating budget for that telescope was something like four million dollars a year. Getting any sort of infrastructure upgrades like pressurizing the room or a sealed equipment cabinet would have been cost-prohibitive. This specific telescope is currently 34 years old. It was built before hard drives were even all that common.

finni wrote:

Carpeted.... wha? Never ever have I seen that.

Carpeted tiles over a raised floor aren't common? The only time I've seen raised floors in server rooms, the tiles have always been carpeted.

I've seen it.

Last place I worked, pretty much the entire interior of the building had a raised floor plenum, office space as well as server space (the office space where the plenum air was *chilled* was very unpopular). The office spaces tended to have carpeted tiles. The active data center spaces tended to have regular (solid or perforated) tiles. Sometimes they got a little mixed.

Though I don't think I've ever seen someone carpet a data center on purpose.

CanSpice wrote:

Carpeted tiles over a raised floor aren't common? The only time I've seen raised floors in server rooms, the tiles have always been carpeted.

Carpet over a raised floor for people, sure; definitely no carpet where there is working equipment. Trapping and releasing dust, ESD hazard, etc etc etc.

I've never heard of carpet in a datacenter (where actual equipment goes). Besides issues of ESD, dust, etc., it is a pain to clean properly, makes it easier to lose small items like screws, and makes it harder to move heavy equipment (carpet keeps the casters on equipment carts from rolling properly).

Yeah, I've never seen a carpeted raised floor, or carpet anywhere in a datacenter for that matter.

I'm currently sitting just above a carpeted raised floor. The office space my cube is part of was built out of a former very large machine room area and the raised floor was retained. In my experience that's actually pretty common. The space underneath isn't actively cooled, fortunately.

Even WE don't have carpets in our data center. We do have ESD mats, though.

There was one "server room" that had carpeting.

That's also the location where the very junior sysadmins tombstoned not one but two different Domain Controllers and then kept turning them back on even when told not to. I wound up having to physically take the machines away from them.

At the time I was involved, there wasn't enough network capacity to get the data from the instruments down the mountain at a realistic rate. Today there is (I think it's a dedicated fiber link), so most of the storage is at the base camp. But yeah, you still need storage at the telescopes, especially when the researchers need to process in real time in order to adjust the instruments on the fly. If they're not using SSDs up there by now, it's because they're underfunded. They killed a *lot* of spinning disks up there.

I'm having a hard time imagining a benefit to carpet on a raised floor in a server room -- noise abatement perhaps?

Our server room floor isn't carpeted, but we used to have carpet glued to the walls for noise abatement. One of the first things I did after I started was to pull that disaster down and replace it with mover's blankets. They were both miserable mold magnets, but at least we were able to swap the mover's blankets out instead of having to shut everything down and drag the servers out to steam-clean the walls. Explaining that to our customers was always fun: "Yeah, we're going to be down between 0000 and 0100 to steam-clean the server room walls."

TIL that you should reboot a Citrix XenApp server. Never. If it's not going to f*ck up the load balancing because the server group will silently consider all rebooted servers dead-dead-dead, it's going to mess up one of your STAs (Secure Ticket Authorities) so badly that, while you might have several servers for redundancy, the Web Interface and Access Gateway will choke on that one and refuse ALL connections.

I also learned that, in the Citrix world, a log is some kind of wooden tree part that has nothing whatsoever to do with a server. No ma'am.

I could swear some Citrix Xen admins were posting about weekly scheduled reboots to help prevent issues?
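The setups I remember were just staggered scheduled tasks, something like this minimal sketch (day, start time, and the per-server staggering are made up; run elevated on each box):

```
# One task per server, with the /ST start time staggered per box
# so the whole farm never reboots at once
schtasks /Create /TN "XenApp weekly reboot" /RU SYSTEM `
    /SC WEEKLY /D SUN /ST 03:00 /TR "shutdown.exe /r /t 300"
```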

I don't know the specifics you're talking about, but I rather doubt they're using XenApp 6.5, then: so far, we're up to five private patches applied to all servers to fix "rare" issues that more or less killed all servicing for several hours each time, plus several workarounds to disable functions that work fine right up until they fail miserably. That's after three weeks in production for the new system (we rebuilt it completely, AD and all, and migrated users and data from a XenApp 4.5 farm).

And I can tell you that after fixing this morning's mess (and still not having found any reason why random STAs prevent the whole setup from working after a reboot), I'm really not inclined to schedule weekly reboots.
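
For now, the best I've got is a dumb post-reboot sanity check that each server's Citrix XML service is at least listening again before pointing the Web Interface at it; a minimal PowerShell sketch, with placeholder server names and port (80 or 8080 are the usual XML Service ports, adjust to your deployment):

```
# Crude post-reboot check: is each server's XML service port listening again?
$servers = "XENAPP01", "XENAPP02", "XENAPP03"   # placeholder names
$port = 8080                                    # assumed XML Service port

foreach ($s in $servers) {
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        $client.Connect($s, $port)
        Write-Host "$s : port $port is listening"
    } catch {
        Write-Host "$s : NOT listening ($($_.Exception.Message))"
    } finally {
        $client.Close()
    }
}
```

A listening port proves nothing about whether that STA will actually issue tickets, of course, which is exactly the maddening part.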