Posted by Soulskill on Tuesday September 04, 2012 @04:17PM
from the apocalypse-edition dept.

Barence writes "Microsoft has released Windows Server 2012, letting businesses test it for 90 days on the Azure cloud platform for free. There are two versions of the main edition of Windows Server 2012: one with virtualization support and one without. The former, the Datacenter version, costs $4,809, while the Standard edition will cost $882. There's also an Essentials version, which replaces Small Business Server, for $501 per server, and Windows Server 2012 Foundation, which will only be available pre-installed on hardware."
Ars has a detailed look at the new edition.

Thank the FSM that I don't do corporate anymore, but frankly that doesn't surprise me one little bit. If there is one phrase that would describe MSFT as a company under Ballmer, it's "doesn't get it".

I mean, here they are, already behind the eight ball when it comes to server deployments (the last numbers I saw had MSFT doing well with SMBs, but large corporate deployments are down with Linux growing) and facing a better-known and more popular product in VMware, so what do they do? Play a game of "let's gouge" and...

A lot of this "depends". Microsoft has lots of SQL Server going, and owns the Exchange turf. There are lots of MS "business partners", developers, and so forth. They've come a long way. No, there is no UI formerly known as Metro. They've updated lots of stuff, including Hyper-V. Is VMware an equal? VMware has egalitarian support for OS versions; MS is kinda sorta trying to do better about that, but most organizations walk around Microsoft rather than trying to make it play with other stuff.

Every so-called "partner" of MS walks around with their fingers crossed that they don't get big enough to be noticed by MS. Because once you get noticed, you are more likely to be wiped out by a vaporware announcement by MS than you are to even be bought out by them...

When the hell is Mozilla going to put that in the default en_US dictionary already?

I dunno (that word "dunno" is in the dictionary); having to add words to the dictionary is a double-edged sword: on the one hand, I have to resort to a Google search with define: [word] [google.com] to check the spelling before adding a word. On the other hand, it artificially inflates my vocabulary ego.

You can tell a lot about a person from their personal dictionary (in your profile directory as persdict.dat). Here's a random sampling from mine:

It's also not new. 2008 had this licensing clause. They also allow you to use a single Enterprise license ($2k) to cover up to 4 instances, though unless you really need the Enterprise features it doesn't save you any money over the $500 license (though I believe it comes with more CALs).

My Debian workstation with KVM allows unlimited virtualized Linux installs (any flavor) and costs nothing. I am free to run other OSes under it as well, but have to license those separately. And I was not forced to agree to be audited by the BSA commandos.

It would be a good idea for MS to offer something between all and nothing, such as a lower cost for virtualization support without the included Windows licensing if (for example) you want to run Linux instances.

Each copy of Windows Standard includes TWO virtual instances for $800. Under the old agreement it was 1 License = 1 Copy.
Each copy of Datacenter includes UNLIMITED copies of Windows for $4800.
Or buy Essentials with NO virtualization for $500 (you can still run it on a virtual machine, but only ONE copy)

Depends on what you bought. Standard editions of Server 2008 and Server 2008 R2 granted two licences: one which could only be used as a Hyper-V host (for any number of licensed VMs) and one which could be used as a virtual guest on that same host. Enterprise editions allowed for one host and 4 guests on the same hardware; Datacenter and Itanium editions allowed for unlimited guests on the same hardware. Web does not include Hyper-V and as such does not grant this licence.
It seems that they've done away with the Enterprise edition.

Huh? Last I saw, Linux (all variants) was somewhere around 65% of web servers in operation right now.

No, 65% of web SITES in operation are on Linux. There is a very significant difference, as hosters and domain parkers are very much in the Linux space, since Apache seems to handle the massive hosting model better than IIS. I would guess Linux probably still has the larger server base for web servers, but that is only one fragment of the server market.

Only an idiot would pay the extra $$ for Windows Server (which isn't cheap), only to wipe it and install Linux. Typically, these users purchase a server with either a Linux distro pre-installed (such as RHEL) or no OS at all and install it themselves (usually the latter).

Yet MS wonders why they have such a comparatively tiny market share of the server market...

According to this arstechnica article [arstechnica.com] (2011), Microsoft had a 25% webserver market share (IIS) as of 2010, and 15% as of 2011. For standard servers, they accounted for 71% of all quarterly server shipments (original source [idc.com], IDC). According to a survey in 2010 [securityspace.com] (the only one I could find on smtp market share, and was linked in Wikipedia), Exchange is the third most popular SMTP server (17%-- behind exim @ 34% and postfix @ 21%, and just ahead of sendmail).

You can call that many things, but "comparatively tiny" it isn't. Microsoft server is remarkably popular in SMB situations, and even in larger companies, and trying to write it off as irrelevant or whatever your angle was is silly.

Also silly is the comment about "code already there" -- EVERYONE does this, from RedHat to VMware to Adobe to any other company that sells multiple tiers of its software product.

That Ars Technica article makes the same mistake so many others do: it confuses hostnames with servers. It assumes a 1:1 ratio of servers to hostnames, and that is nowhere near the case. It also confuses "Apache" and "IIS" with non-Windows and Windows. There are lots of Apache servers running on Windows out there (mostly because they have apps that require a Java application server like Tomcat, and Apache is typically used on the front end of Tomcat, although IIS can be used as well).

Seems to me the issue you mentioned would skew it in favor of Apache (it would over-estimate the number of Apache installs), but honestly I disagree -- I think it's reasonable to look at the "number of web domain instances" rather than fussing about the number of underlying OSes, which have become largely irrelevant in these days of "virtualize everything".

$4k to enable virtualization support (that the code is already there for?)

Yet MS wonders why they have such a comparatively tiny market share of the server market...

This is incorrect; the virtualization is free (Hyper-V Server, anyone?). The $4k is for an unlimited license on that server. If you run only 2 CPUs and fewer than 10 virtual servers, you will save money by licensing the Standard version.

We're doing this right now, and it rocks. Someone says "we need a new Windows server." No problem -- roll out the VMware template, and 15 minutes later the server pops on, already joined to the domain and activated.
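
If your shop drives this from scripts instead of the GUI, a minimal PowerCLI sketch of that kind of rollout might look like the following (the vCenter host, template, customization spec, and server names are all made up for illustration; it assumes an OS customization spec that handles sysprep and the domain join):

    # Connect to vCenter and clone a new VM from a Windows template.
    # The customization spec takes care of naming, activation, and domain join.
    Connect-VIServer vcenter.example.com
    New-VM -Name "app01" -Template "Win2012-Base" -OSCustomizationSpec "DomainJoin" -VMHost "esx01.example.com"
    Start-VM -VM "app01"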

As someone that uses both the shell and GUI config options, what's wrong with a choice? Sometimes configuring things through a GUI is faster. I'm all for that, especially if it can take less of my time.

For running scripted stuff, obviously the shell is better. Both are made for specific purposes.

Do it once through the GUI, pick up the PowerShell commands, and then run them on as many servers as you need. It's really the best of both worlds, and in many ways the object model of PowerShell is superior to the legacy shell environments.
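
As a small, hedged example of that workflow on Server 2012 (the feature and server names here are hypothetical): add a role through the wizard once, note the Install-WindowsFeature call it corresponds to, then fan it out:

    # Install the same role on a batch of servers from one admin box
    $servers = "app01", "app02", "app03"
    foreach ($s in $servers) {
        Install-WindowsFeature -Name Web-Server -IncludeManagementTools -ComputerName $s
    }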

That is up to you. There is no increased CPU count. Both Standard and Datacenter support 2 CPUs per license.
With Datacenter you get unlimited (Windows) VMs, so if you run more than 10 Windows VMs on a (2 CPU) box, it is cheaper.
For less dense virtualization, use Standard licenses, as each gives the right to two VMs.
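
A rough worked example using the list prices from the summary: Standard at $882 covers two VMs, so ten VMs cost 5 x $882 = $4,410, while an eleventh VM needs a sixth license (6 x $882 = $5,292). At that point the $4,809 Datacenter license is already cheaper, which is where the "more than 10 VMs per 2-CPU box" break-even comes from.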

Licensing is also CPU (socket) based, not core based. So technically you can have 2x 10-core CPUs (40 threads) to run, let's say, 5-40 VMs with the $4k license. No functionality is disabled from Standard to Datacenter aside from VM licensing.

The abstract is incorrect. Standard and Datacenter are now the same release with exactly the same functionality. The only difference is in the licensing. From the referenced article:

Functionally, Standard and Datacenter are the same. Even things like clustering, which used to be the sole preserve of the higher-end Windows Server SKUs, are found in Standard. The only difference is the number of Windows Server virtual machines supported per license.

So again: The only difference between the Standard and Datacenter is the licensing. Same software, two licenses.

We run a heterogeneous shop split about 50/50 between Linux (Debian) and Windows (2003/2008). Windows excels at certain things: Active Directory, and running .NET apps delivered to us by various contractors. Our Linux systems run mission-critical services as well as file servers and virtualization via VMware's ESXi products (horribly overpriced, but it's the situation that I inherited). I poke fun at the Windows guys fairly often and get joked at in return, but the reality is that we all realize it's about the right tool for the job. I don't have a single bare-metal Windows install at home and I don't feel at all left out of the commercial loop, but like everything in life, your own mileage will vary.

Be able to integrate in a supported manner with 95% of business workstations out there? Be able to create an incredibly easy-to-manage LDAP system that integrates seamlessly with Exchange? Provide Exchange?

* ReFS is lacking a few notable features, including file compression/encryption, sparse files, hard links, extended attributes, disk quotas, and others[1]. You could say that the only notable improvements over NTFS are much improved resiliency and higher capacity limits. You can't compare this to BtrFS. At all. The two aren't even in the same ballpark. ReFS is there to store millions of large files and manage bad blocks in a...

So in other words, by your own description, those are things that you can already get in Linux.

BtrFS has not been completed yet. ReFS is shipping. ReFS will not have all the features of the completed BtrFS, but for now ReFS offers features not available in anything shipping on Linux.

I don't think ZFS is production quality on Linux yet either. And Storage Spaces under Windows is now shipping.

Dynamic Access Control actually ups the ante for SELinux, grsecurity, AppArmor, etc. While it still protects access to resources, it does so based on potentially very fine-grained policies which can express rules based on a very...

It only took you three days. We were dealing with a screwy Microsoft Lync mobility issue whereby the iOS client just wouldn't work (but every other client under the sun worked). The only odd-ball thing about our setup was that one of the four servers (at least four are required for any Lync deployment) was a Linux box acting as a reverse proxy. We opened a ticket with Microsoft on April 30, 2012. The time spent with them since has been a waste:

* We repeatedly requested the actual HTTP request/response data from the iPhone's perspective, annotated with notes on how it differs from what the iPhone expected. Every time we requested it, they provided us with the client's general iPhone debug log (which was useless to us), even though we explained that it doesn't fulfill our request.

* We asked for details on what is expected of the Lync reverse proxy. They provided us with instructions on how to set up TMG. We replied that the provided information did not fulfill the request. Their response was a shrug and another link to the same instructions.

* We asked if there was anything specific to the iOS client that required ISA or TMG. They demurred, refused to research it, and refused to acknowledge the bug for *four* months. I'm not exaggerating. It was August 31 when we inferred from the continued back-and-forth that the only way Microsoft could hope to grasp the problem was for us to make the reverse proxy an ISA server.

From this, I learned that Microsoft support really isn't much better than doing it yourself. They have no inside tricks, they have no way of getting a guru to weigh in on anything, and they hope that by sending you the same wrong information over and over they won't have to acknowledge faults in the product.

For my part, calling Microsoft support isn't an option any longer. It is a waste of time and money that could be better spent solving the problem myself.

Well, if it's like the OS X client, then it's written by Microsoft. I have an issue with MS Lync client on OS X where all video is being handled on the CPU instead of GPU. And Lync is the only program I have that issue with. Hmm...

Personally, of all the Enterprise-level support I've dealt with (e.g. IBM, EMC, HP, Dell, Oracle, CA, etc.), Microsoft is among the best.

(I'm talking Enterprise support, as in paying 7 figures/yr for licensing and support. Not calling an 800 number to India for someone to tell you to reboot your computer, as you would with an el-cheapo Wal-Mart laptop.)

Don't get me started on Oracle. Most of the time the problem I'm calling about is less painful than dealing with Oracle support.

I couldn't agree more. I've never experienced support like we get from MS. Just recently we had a 12GB .dmp file analyzed, and in less than 36 hours they were able to tell us which shitty 3rd-party driver was causing our boxes to BSOD. The vendor that shipped this driver (mentioned in the parent above) has so far been completely useless.

True, but I can't recall the last time I ever had to call RedHat for anything. At all. Closest I ever came was when a DBA wanted some custom tweaks in RHEL, and some kind soul put the best ones to dig into (with full explanations) on Oracle's KB site (yeah, I know... bet the devil got hypothermia that day too).

Microsoft OTOH, especially for bugs that aren't (yet?) in the KB? hoo-boy.

If all they need is a file server and they're happy with a workgroup, there's no reason to bother with Windows Server at all -- there are many NASes out there that will fit the bill, or you could build your own and stick some distro on it (not like there's a shortage of SOHO fileserver distros out there).

That's just it. There is no functional difference now between Standard and Datacenter. They have the exact same features. The only difference is how many virtual machines you can run: $800 for Standard vs. $4,800 for Datacenter.

Sure, but right now you can get Server Core and standalone Hyper-V Server and run many virtual servers; in the future, though, if you want to stay current, at some point you're going to have to pay through the nose.

That'll be why the world runs on Windows servers and no-one would think of putting any critical service on Linux.

The Oracle world (big business, government) is definitely running on Linux instead of Windows. With the decline of Unix running on "big iron" (IBM's RS/6000 and AIX being the last holdout), everyone is moving their enterprise, mission-critical apps to Linux. Especially with Oracle themselves releasing a tweaked version of RHEL, Linux is an "officially supported" platform that even satisfies the corporate PHBs and bean counters.

I make a pretty good living porting Oracle enterprise databases and apps to Linux. Just a couple weeks ago, we ported an Oracle WebLogic middleware server from Windows to OEL Linux running on the very same piece of hardware, and got a tenfold boost in performance. With results like that, business loves Linux now.

Granted, only server-side things on Linux are welcome in the business world. The desktop will sadly *never* be adopted in any significant numbers in any enterprise. All because Windows and Active Directory rule that market segment.

For server functionality, pure bullshit. I have a decade's experience running Windows and *nix servers, often in the same networks, and while Windows has AD and GPOs to its benefit, in other respects it is horribly backwards and painful to use. Just backing up the system config in Windows is appallingly difficult compared to *nix.

So, how does Linux handle online backups of running server workloads? Does Linux have a way to signal to running services (like RDBMSs, hypervisors, file servers) that a backup is about to happen, negotiate which files are to be included in the backup, and then in a fraction of a second work with the running service to synchronize disk content so that the backup will be consistent?

A running database server will almost invariably hold some state in memory. If the power is lost it will be able to rebuild from the disk state, but that can be a time consuming task. If the backup system is simplistic it will just back up the disk state of any file. Upon restoring it will appear as if the power was lost and the roll-forward log will have to be played.

A more advanced backup system will integrate with the services to ensure that for a very brief time (just enough to take a snapshot) the disk state is consistent and thus will not require a rebuild/roll forward if it is ever restored.

Windows comes with Volume Shadow Copy Service (VSS) and a file system which supports block level snapshots. VSS works with VSS aware applications (VSS writers) such as Microsoft SQL Server, Oracle Database Server, Exchange Server, Active Directory, NTFS and Hyper-V server. When a service is a VSS writer it participates in VSS coordination/synchronization to create consistent disk state.

It even works through Hyper-V: When you back up the Hyper-V host, Hyper-V itself is a VSS writer which recursively invokes the VSS running inside guest OSes (if Windows) to ensure that any service inside the Hyper-V guest OS is also disk consistent exactly when a snapshot of the virtual hard disk image is created.

To my knowledge, Linux doesn't have anything like VSS, which means that each application/service must be handled separately. Typically you will stop the service during the backup. Some services, such as PostgreSQL, can recover from a non-consistent disk image; others cannot. Individual applications may have commands/services which allow admins to "dump" state to a file to be backed up separately. All in all, reliably backing up a running Linux server is more complicated than backing up a running Windows server with VSS-aware services.
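
For a feel of what this looks like in practice, here is a minimal sketch using the in-box tools (the drive letters are made up for the example; run from an elevated prompt):

    # Show the VSS writers (SQL, Exchange, Hyper-V, NTDS, ...) registered on this box
    vssadmin list writers

    # One-shot, application-consistent backup of C: to a dedicated backup volume E:
    wbadmin start backup -backupTarget:E: -include:C: -vssFull -quiet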

I find backing up the registry in a fashion that allows me to easily restore configurations a real pain. NTBackup and its descendants are hardly backup wonders. Configuration via text file is infinitely easier to deal with than binary hives.

I don't even bother restoring failed domain controllers any more. I have other DCs replicating AD data, so I just build a new server, promote it to a DC, and let replication do the heavy lifting. Helluva lot easier than what passes for bare-metal recovery in the Windows world.
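
On Server 2012 that rebuild-and-promote approach is roughly a couple of cmdlets (the domain name here is hypothetical, and on 2008-era servers you'd use dcpromo instead):

    # Add the AD DS role, then promote the new box to a domain controller;
    # AD replication then repopulates the directory from the surviving DCs
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
    Install-ADDSDomainController -DomainName "corp.example.com" -InstallDns -Credential (Get-Credential)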

Dunno - I find it infinitely easier to just restore the VM from snapshot, and failing that, restore from the SAN snapshot, and failing all that, restore that VM straight off of tape (the last bit may be a hair outdated, but it still works).

The days of restoring a server on bare metal ended a long, long time ago for me. Kinda glad to see the $#@%(*! concept dying off.

The thing with *nix, or at least any version I've worked with, is that the functionality is already there. Configurations are almost always in human-readable text files, and I have a toolset that has been around in one form or another for decades to work with those files. I can easily make backups of daemon configurations, and indeed have been able to restore a server with the contents of /etc and the data files.

That's utter nonsense. Windows Server Backup is about a billion times better than NTBackup. Pure image-based backup, allowing multiple versions of files to be stored, Exchange aware, SQL aware, and allowing individual files to be restored, easily. I would use WSBU over NTBackup any day of the week (and do). It works every time -- and offers damn near instant bare-metal recovery of corrupted servers. NTBackup, on the other hand, required you to rebuild from scratch and then manually restore files, apps, etc., painfully.

Just because you never learned how to use a tool doesn't make it bad. It is trivial to configure WSBU to back up individual components, such as system state, volumes, or yes, even individual folders. Again -- *you* not knowing how to do something doesn't make it impossible.
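
For instance, a couple of hedged wbadmin one-liners (the target volume and folder path are invented for the example):

    # System state only
    wbadmin start systemstatebackup -backupTarget:E:

    # A single folder rather than a whole volume (2008 R2 and later)
    wbadmin start backup -backupTarget:E: -include:C:\data\configs -quiet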

And for the obligatory Slashdot 2012: no, I am not paid by or affiliated in any way with Microsoft. Sometimes people like the changes they make because they actually tried them and found them better.

Last time I looked at it was when Server 2008 was released. This isn't an issue of "didn't take the time to learn it" -- at that time, the official stance as given on the official Exchange team blog was "it was crippled so that no one would make the mistake of using it for business". To reiterate -- this was the OFFICIAL Exchange blog, i.e. Microsoft employees.

It's entirely possible that in the time since, they have corrected the issues I mentioned, or brought it back as something new -- but they definitely DID cripple the built-in backup on the release of 2008. I'm not sure how possible it would be to find that article, as it was a blog entry from 5 or so years ago, but I'll give it a shot and post it here if I do manage to find it.

I found it: Windows server backup not exchange aware [technet.com]: "We have decided to develop and release a VSS-based plug-in for Windows Server Backup that will enable you to properly backup and restore Exchange 2007 with a built-in Windows 2008 backup application. While you will be able to backup and restore Exchange 2007 on Windows 2008, you should not expect feature parity with the Windows 2003 NTBackup experience."

There are lots and lots of other posts on this. More to the point, the features you mention are brand new as of R2 [microsoft.com] -- they were not there in the original release: "Windows Server Backup in Windows Server 2008 R2 includes the following improvements: More flexibility in what you can back up. Windows Server Backup enables you to back up selected files instead of full volumes. You can also exclude files based on file type and path."

That is, you simply couldn't do this prior to R2, which, along with no tape and no Exchange support, made it fall off of my (and many others') radars as utterly irrelevant. Basically, all of the cool features you mention simply weren't there in the initial release -- it was a straight dumb "image the whole box or nothing at all" program, except it wouldn't even work if you had stuff like Exchange or Hyper-V and no VSS plugin.

Not only that, but even if I had noticed that release -- which TBQH I did not -- NTBackup was already such a disaster that I would be hesitant even now to return to something like WSB.

It sounds like your experience is mostly with Windows Server 2008 R2 and above, which is fine; if that's true, just keep in mind that there are a lot of us with horror stories about NTBackup, and that Windows Server 2008 was not always as polished as it is now.

Active Directory is worth the price of Windows Server alone, and I say that as a Linux sysadmin who's implemented an OpenLDAP infrastructure (everything from AuthZ/AuthN to Puppet ENC backend to a single point of truth for Nagios). AD is miles ahead of anything any open source or Apple product has ever implemented.

Sure, a Mac Pro or a Mac mini + external Thunderbolt RAID may serve fine as a pedestal server. But I was under the impression that only Windows, Linux, and the like ran on rackmount hardware now that Apple has discontinued Xserve. Or has it already become common practice to put pairs of Mac mini computers into 19 inch racks [amazon.com]?

Where are the dual PSUs and hot-swap HDDs?

The mini doesn't even have an easy-to-get-to HDD (unlike nearly all other desktops).

On a serious note, though, you actually can run POSIX apps on Server 2012. NT has, since its inception, included support for POSIX APIs and filesystem behavior. These days it's called SUA (Subsystem for UNIX-based Applications), and a smallish but fully functional operating environment for it, called Interix, is available for free. The installer will also let you enable various tweaks such as SetUID/SetGID behavior and filesystem case sensitivity, things you can't get with Cygwin or the like. It's implemented as an NT subsystem, same as Win32, so the speed is basically native as well. Interix comes with a working build toolchain, plus you can get a package manager for a repository of precompiled software and updates from http://suacommunity.com/ [suacommunity.com].

I'm not sure I'd advocate adopting it at this point if you haven't already -- MS has been making moves toward discontinuing support for some years now, and it appears to no longer be in any of the client editions but Enterprise -- but it exists, and it works. MS themselves used it to host Hotmail on Apache before they ported it to run on IIS. I use it (on the client side) both for various utilities that I prefer the POSIX versions of (git and ssh and such, plus sometimes there is no Win32 version) and for bash (my primary shell).
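
If you want to poke at it on Server 2012, enabling it is roughly the following (the feature name here is from memory, so treat it as an assumption and check with Get-WindowsFeature first; on client SKUs it's a Windows optional feature instead):

    # Verify the exact feature name before relying on it
    Get-WindowsFeature *unix*

    # Install the Subsystem for UNIX-based Applications
    Install-WindowsFeature -Name Subsystem-UNIX-Apps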