CLIFF REEVES: Thank you for the applause and doing it at the beginning. It so seldom happens at the end.

Okay, so what I’m here to do is first of all thank you very much for attending MEC, and thank you very much for attending the second day keynote. What I’d like to do is take you through where we’re going with the Windows servers and the whole Windows server family. Paul gave you a comprehensive view of the .NET server family yesterday, so I’ll touch lightly on that, but I’ll focus mostly on the underlying Windows server and where it’s going. And I want to address it from two perspectives: one is the kinds of problems we think we’re dealing with today, the challenges that we’re dealing with and you’re dealing with; and then maybe dissect the problem into three parts by looking at the people who care about the server — why do people buy them, how do they use them, and what challenges do they face with them.

So let’s start. Paul showed you this yesterday as sort of the enterprise reality, and all I wanted to point out is you’ve seen the story. Every conference you’ve ever been to as IT professionals says all the following: challenges, application backlogs, complexity, blah, blah, blah. And they’re all true, of course, each in its own time, but I do think that at this point we’re at the stage in business where connectivity breakthroughs, bandwidth breakthroughs, lower cost of computing and so on are forcing us — and also giving us the opportunity — to reach everybody that we possibly can. So it’s no longer just a priesthood of employees; it’s no longer just the employees in the company; it’s partners, it’s suppliers, it’s customers, and in fact it’s potentially every consumer. And one way or another we have both the opportunity and in some cases the obligation to have our systems reach out and touch those people.

So that’s an incredibly complex environment from the point of view of what does the software look like that serves those diverse needs. What does the connectivity technology look like that meets that problem? What does security look like in an open environment like that?

So that’s what we’re going to talk about today, and I want to talk about it from the point of view of three interesting groups of users. The first is sort of in a sense the traditional people we think of when we first think of deploying a server system, the IT professionals, the people who approve it, the people who run it, the people who guarantee service levels against it, the people essentially who have to live or die by the decisions they make that that’s the system they want to deploy.

The second group, we’ve always looked at those as smaller, narrow groups, the people who choose a server because of the development environment that it offers or their ability to do development and target that server environment; how rich is that, how powerful does it make them, how flexible are the applications they produce in that environment.

And there’s also a group that is now forming more and more of a bond with the server. There used to be that joke, that cartoon in The New Yorker that everybody talked about — a dog sitting at a keyboard, and the caption says, “On the Internet no one knows you’re a dog.” And in fact, in many cases on the Internet, in the world of the Web, no one knows or cares what server you’re running — that’s true when you’re purely doing Web access — but increasingly we’re starting to see the server become an incredibly important place for knowledge workers to meet. I’ll go into that a little bit later, but they’re emerging as a new power group in terms of choosing the server and deciding the kind of function that resides on it, beyond the application. They care about how it works for them very directly, just as much as they care about their desktop.

So let’s go through these groups one at a time.

First, the traditional group, the IT pros — you guys, for the most part, I think. So, how do we learn about what makes a great server OS in the enterprise? We do it essentially by watching the enterprise. And one of the things we did in the early deployment of Windows 2000 was recognize that these servers were going into mission critical applications — e-mail, messaging, collaboration — but also running the ERP applications, supply chain management systems, customer relationship management systems and sales force automation. They were really beginning to be in the data center and running the core. If a system went down, or was unreliable or unpredictable, then essentially companies didn’t do business well.

And what we did was bring in some of our customers — companies like Starbucks (hell, we drink most of their coffee, I think, at Microsoft) and the Gap — and we brought their systems in completely intact and cloned their environments: cloned the database sizes, cloned the patterns of access and the load their systems produced, cloned every single thing down to the hardware choice, the interconnect technology, the kinds of storage they purchase. And we took those systems and subjected them to exactly the same load that those companies saw when they were running in production.

And we learned a tremendous amount from that. What we actually learned, first of all, was that as we looked at more and more applications, the topology of a particular data center structure fell into a small number of rough patterns — not that everybody’s was the same, but you do things enough times and you start to see abstractions, sort of images that repeat themselves, and we saw patterns of deployment. We saw departmental data centers. We saw enterprise data centers. We saw the Internet data center, and we saw geographic high availability clusters in some cases. You can think of those as not necessarily increasing degrees of complexity, but there are some really different things going on in there. At the high end, of course, you’re absolutely concerned with full availability all the time, worldwide international loads, load balancing and so on, and down at the lower end maybe you’ve got fewer concerns with access control, but you’re much more concerned with rich function and rich interaction from users.

So we took each of those architectures and we started producing what we called reference architecture guides from those. And reference architecture guides are nothing more than descriptions of data center topology — firewall here, load balanced Web blade front end here, business logic here, transaction database system here, security policies as follows, and firewall policies as follows, storage managed in such and such a way — pretty descriptive.
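A reference architecture guide of that descriptive kind can be sketched as a simple data structure — the tier names, components and policy notes here are invented for illustration, not taken from any actual guide:

```python
# Hedged sketch of the tiers a reference architecture guide describes,
# with the policy notes attached to each. All values are illustrative.
reference_architecture = {
    "edge": {"component": "firewall", "policy": "allow 80/443 inbound only"},
    "web": {"component": "load-balanced Web front ends", "count": 4},
    "business_logic": {"component": "application servers", "count": 2},
    "data": {"component": "transaction database", "storage": "mirrored SAN"},
}

# Walking the tiers in order gives the request path through the topology.
request_path = ["edge", "web", "business_logic", "data"]
components = [reference_architecture[tier]["component"] for tier in request_path]
```

The point of writing the topology down this explicitly is exactly what the guides aim for: every firewall rule, tier count and storage choice is stated, so a deployment can be reproduced rather than reinvented.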

And now we’re beginning to work with partners — initially the people whose hardware the Windows operating system runs on, people like Compaq and Unisys and Hitachi and Fujitsu and IBM — and we’re beginning to make those reference architectures real; we call them prescriptive architecture guides.

And what we do with those is bring them in house and work jointly with the partners to test the very specific configuration — and that includes cabling, that includes firewall hardware, that includes banks of Web front ends as well as database servers — configurations that they actually run and that will meet and be able to deliver service level agreements.

And then we also work with those companies to make sure that when we go out and sell or deploy one of those systems that those customers can have a single point of contact, joint queues so they can avoid sort of the finger pointing that can occur even between well-intentioned vendors who have a boundary sort of in your data center.

So we got a lot of learning from that, and we put it into more than just technology. The key point, of course, is that in that learning we discovered that while there were a number of things we could do to make Windows more reliable, more scalable, perform more strongly and predictably, and be managed more easily, it was in fact the whole kit and caboodle — all of the software, all of the hardware, all of the interconnect together — that made a highly reliable system. And that’s beginning to pay off.

And these are just a few recent audited measurements of availability in business critical data center deployments — by data center I mean the general data center, not necessarily the Datacenter product, though some of them are running on Windows 2000 Datacenter Server, and I’ll talk about that — but these are authenticated, audited, and I’ll just take you through one of them. FreeMarkets is a good example. FreeMarkets has done about $8 billion of trading over its history. They conduct online auctions for goods and services for a number of vertical industries. They’ve done about 5,000 auctions, and as a result of auction pricing they’ve saved the buyers of these goods and services probably about $1.5 billion a year. Now, if they don’t have a system, they don’t have a business. That’s all they do. They’re a completely electronic business. They have no other face than the system you see when you go into FreeMarkets.

So getting high reliability is essential for them. They went from a system that was around three 9s, about 99.9 percent available — sounds like a lot, but when this is your only business, any outage means you have no business — and they worked with us and deployed Windows 2000 Datacenter Server on a 32-way system. They don’t use it as a full 32-way; they break it up into partitions and have four-node clustering fail-over and so on. And they’ve now delivered a system, with joint support queues with us and with Unisys, that has five 9s certified availability. There’s a complete write-up on FreeMarkets by Giga Information Group, who actually did the auditing and wrote the economic justification.

They paid for the system in six months at an internal rate of return of around 252 percent. Now, a lot of people here, I know, are running Exchange and saying, “I’m not sure what Datacenter means to me, because Exchange really will only exploit four processors.” And this was a situation in which the processors were put together in clusters and other loads were running on there as well, and as a result I think you can start to see that the Datacenter program starts to offer something — including joint support queues — that really justifies and supports the kind of server consolidation activities a lot of you are being pushed toward as well.

So we’re pretty happy about the progress there on reliability, but most people think of one of the measures of the oomph of a server as its performance. And we measure that along two axes. The traditional measurement of how well a server performs is its so-called scale-up performance, and it’s usually just how scalable that piece of hardware is: if I take one combined business load, just how much can I crank it up, how many users or how many transactions can I push through it? And we see benchmarks like the SAP benchmarks or the TPC-C non-clustered benchmarks as the standard industry measures out there.

And we’ve been pushing those benchmarks very, very hard. I’ll take you through where we are on them. We’ve delivered a leading SAP benchmark — I’ll show you that in a second. And then we also announced a couple of weeks ago that we had entered the top ten of the scale-up TPC-C benchmarks for the first time, and that’s a very exotic benchmark area. It’s an area that has hitherto been dominated by very, very expensive, proprietary RISC-based machines. There’s never been an industry standard Intel platform in there. It’s basically proprietary hardware/software combinations, highly tuned for that kind of environment, and as a result very expensive systems to build, very expensive systems to acquire, and very expensive systems to manage. And for the first time now we’ve got a top ten entry that’s running a Windows operating system on Intel standard hardware.

And while we just cracked the top ten, you haven’t begun to see the end of that yet. You can expect to see us move steadily up the ranks there as we commit more and more time to working jointly with hardware partners to build faster and more scalable machines and tuning the Windows system so that it can handle those high-end workloads, and as a result can be the system that scales all the way from your desktop through your department and way up into the high end and the high reaches of your data center.

Now, the other aspect of scaling is one that’s achieved by a combination of software structure, software architecture and infrastructure. We’re seeing scaling like that from companies like — and initially, again, it’s sort of a specialized crowd — the Yahoos and the eBays, and in fact e-mail systems are characteristic of this kind of scaling too. By and large these systems don’t depend on one single processor, one single transaction system, one single database; the number of processors deployed can be more or less proportional to the number of users that access the system. So scale-out measurements are important as well, because they measure the performance of a different kind of application.

Now, the only point in bringing that up: today, scale-up measurements — and scale-up applications, which run on single databases or single transaction services — are most of our business. SAP sort of looks like that, Siebel looks like that, JD Edwards, Baan — they all look a little bit like that, and as a result they’re going to have a significant component of scale-up dependency.

But over time, we’re starting to see the shape of applications change so that the units of work are separated out, and we can begin to see applications scale horizontally. At that point a whole other set of economics comes into play in what it costs to acquire and deploy, and another set of challenges in how you manage a system like that.

Let me get into that in just a second, but first of all, how have we been doing on scale-up? This is actually the SAP sales and distribution benchmark, which is an attempt to produce a real-life, as opposed to an artificially constructed, application benchmark. What this shows is Windows performance against that benchmark: across the horizontal axis is the number of processors in the machine, and vertically it’s the number of users that can be supported against this standard benchmark. And you see that it’s very, very close to linear. You’ll also see that on pure scale-up, at around 32 processors, which is the best we’ve got, it’s around 20,000 users, and that works out to about 600 users per processor — and it looks as though it scales pretty linearly, which is quite remarkable. The best Sun has done in that space is on a 64-way machine: slightly lower results, and roughly half the number of users supported per processor.
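The arithmetic behind those scale-up figures is easy to check. A sketch using the round numbers quoted on stage (illustrative, not official SAP SD benchmark data):

```python
# Back-of-the-envelope check of the quoted scale-up figures.
users_total = 20_000      # users supported on the 32-processor system
processors = 32

users_per_processor = users_total / processors
# 625.0, which matches the "about 600 users per processor" claim.

# If the quoted Sun machine supports roughly half as many users per
# processor, a 64-way system at that rate lands near the same total:
sun_users_per_processor = users_per_processor / 2
sun_total = sun_users_per_processor * 64   # 20,000.0
```

In other words, under the figures as quoted, it takes twice the processors on the competing platform to serve about the same user population.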

So we can see now that Intel machines — industry standard priced hardware and software — are starting to really encroach on the high end of the system space, and that is having a straight economic effect.

We look at scale-out — now, I want to give a caution here, sort of a don’t-try-this-at-home thing. This is a measure of transaction rates, scaling out, on the TPC-C benchmarks. And I’ll take you through the numbers, but before I do, I want you to realize that the numbers of transactions we’re looking at here run from around 160,000 to 500,000 — and the 500,000 number, to put it in perspective, is the combined business of eBay and Amazon times 1,500. We actually had to buy six months’ worth of a manufacturer’s disk drive production run just to store the data to achieve this benchmark. And I’m going to make a point about the size of this later.

But let’s just look at the numbers for a moment. The best you can do against that benchmark on scale-out on a Sun machine gets you around 160,000 transactions on a 64-processor system. On a series of Compaq systems, also running 64 processors, we actually broke that and got about 180,000 — so marginally better, on a number of much less expensive servers. And then, using 192 processors, we came close to tripling that result.

And the important number to look at is the dollars per transaction — basically just hardware cost divided by transaction rate. On the real top end of the Sun machine it’s around $50 per transaction, and, scaling linearly on the Intel platform, we’re seeing about $20 a transaction.
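Using the round numbers quoted above (hardware cost only, as spoken on stage, not audited TPC-C results), the implied economics can be sketched as:

```python
# Hedged sketch of the quoted cost-per-transaction comparison.
sun_txns = 160_000            # transactions on the 64-way Sun system
sun_dollars_per_txn = 50      # quoted hardware dollars per transaction

intel_txns = 180_000          # transactions on the 64-processor Compaq farm
intel_dollars_per_txn = 20

# Implied hardware spend under those per-transaction quotes:
sun_hardware = sun_txns * sun_dollars_per_txn        # 8,000,000
intel_hardware = intel_txns * intel_dollars_per_txn  # 3,600,000

# If scaling stays roughly linear, tripling the Intel result with 192
# processors triples the spend but holds dollars-per-transaction flat.
intel_192_txns = intel_txns * 3
intel_192_hardware = intel_192_txns * intel_dollars_per_txn
```

The flat dollars-per-transaction line under linear scale-out is the whole argument: more capacity costs proportionally more, instead of the steep premium a single big proprietary box commands.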

And here’s the point: while that particular workload is quite imaginary for most of us, I believe what’s about to happen in the scale-out world is what happened with memory and storage. Memory prices, due to technology advances, plummeted rapidly in advance of demand — there’s always been demand for memory, but there really weren’t the applications to soak it up. Then disk drive prices plummeted, storage prices plummeted, and suddenly we saw a dramatic increase in technology and business around digital imaging, online video, online pictures. We can be almost as greedy as we like these days with our disk space, considering how little it costs to get more, and as a result applications are now arriving by the boatload to soak up that capacity, driving demand for the system.

And I think it’s worth watching this: at some point in the not too distant future, I believe, we’ll start to see application topology change to soak up this kind of cheap processing capability. And when I get into .NET I think you’ll see there’s synergy here: the way applications will be constructed in the future — as Web services that can be dispersed around and execute automatically — makes them exactly the kind of applications that will rush in to exploit incredibly low, PC-level economics of software and hardware capability.

I talk a lot about scaling, so it’s worth bringing up that, I think a couple of days ago, it was Windows 2000 Datacenter Server’s birthday. We’ve had tremendous success with this product. It’s been an incredible learning machine — learning how to build the highest ends of scalability with all of our partners: with Compaq, with IBM and of course with Unisys — and we’ve started to see some real business deployments and real business value accrue from having a jewel in the crown of the server line like this. Initially people asked why they would want a Windows system that big, or able to handle those kinds of loads; but as Windows becomes more prevalent in the data center and people start looking at server consolidation, cost of management, and the ability to handle large amounts of memory and large databases — and we’ll show you something that brings that to life, I hope, in just a second — we’re starting to see Datacenter go out incredibly strongly.

So let me try, if I can, to bring to life the value of these huge systems — huge not only in the capacity they have to process, but also in the amount of data and memory they can manage. What I’d like to do is invite Bob Osborne — you can call him Orange Bob; he’s from Syracuse, and when you see him later in the video demonstration, you’ll see why we call him Orange Bob. Bob is going to show us the Datacenter demonstration to try to bring the scalability of these systems to life a little. As you know, server demonstrations are usually a little dull — kind of like watching paint dry — but Bob’s going to bring it to life a bit. So, Bob, tell us a little about the system you’re going to show us. What configuration do we have here?

BOB OSBORNE: Absolutely. Hello — how are you doing? Today we’re going to take a look at the Unisys ES-7000. We actually have a 32-processor system with three symmetric boxes connected to it. It’s running live back in Redmond, Washington, and we’re going to terminal-serve into it today, so you can actually see the kind of data — the girth and the size of the data — that we have.

CLIFF REEVES: Okay, and so just exactly how much data is there that we’re going to be looking at?

BOB OSBORNE: Oh, goodness — we went out to a consumer products group that specializes in gathering data, and we’ve gathered data from across the United States: out of mass merchandisers, out of grocery stores, out of drug stores, over the last five years, and pumped it up to that size. So if anybody out here has ever purchased a gallon of milk at any drug store or food market, or gone to a Kmart or a Wal-Mart — that’s a mass merchandiser — you’re in this database.

CLIFF REEVES: So it’s basically every mass market purchase, you know, at Kmart, Target, every drug store purchase and every food store purchase for how long?

BOB OSBORNE: For five years, five years worth of data.

CLIFF REEVES: So every purchase anybody has made is actually in this database?

BOB OSBORNE: As a matter of fact, it is, and that’s what makes it so large. We’re talking 1.2 terabytes of raw data.

CLIFF REEVES: There’s no specialty store purchase in there?

BOB OSBORNE: No, no, you’re safe. (Laughter.)

CLIFF REEVES: So that’s a hell of a lot of data. So just what kinds of information can we pull from consumer data that large?

BOB OSBORNE: Oh my gosh, you know, you can look at it along all the dimensions. If you’re looking at cities, maybe you want to look at certain products, drill down to how many products were sold in a single area. Let’s start with a couple of good examples of data that really tracks together linearly, compare them, and then we’ll look at a couple of myths and have a little fun with the data.

CLIFF REEVES: Yeah, let’s look at some fantasy data here.

BOB OSBORNE: This is actual data that’s live right now in this database, so all these queries are going live against the box. So if you take a look at spaghetti and spaghetti sauce, the pasta that’s out there, people in this type of a database, they would never think about buying spaghetti pasta without buying some sort of spaghetti sauce.

CLIFF REEVES: So the sales are highly correlated?

BOB OSBORNE: Oh, absolutely.

Then there’s a very old myth out of data mining, which is the correlation of beer to diaper sales. The idea was that if you sent a father to the food mart — a real small store or whatnot — and you put the beer next to the diapers, then when he went to pick up the diapers he’d pick up a six-pack. Although the data might kind of support this, we really don’t have enough information to confirm or deny it.

CLIFF REEVES: I think if it was on a Saturday when the football games were on, you’d find a stronger correlation.

BOB OSBORNE: Oh, probably a high, high correlation.

Now, one of the final ones, to have a little fun with: if you look at cough syrup versus beer sales, during the middle of the summer, when you’re buying the most beer, you’re also buying the least cold medicine.

CLIFF REEVES: Oh man, inversely correlated.

BOB OSBORNE: Oh yes.

CLIFF REEVES: So if you drink beer, you won’t get a cold.

BOB OSBORNE: I’m not quite sure the data says that, but you could probably get into it.

CLIFF REEVES: I think it’s worth trying. (Laughter.)

BOB OSBORNE: Now, that’s the basic picture of what we have here, but let me give you an idea of what it really looks like. The whole time we’ve been doing this, these have all been live queries against 1.2 terabytes’ worth of data. We’re actually running this on the ES-7000, which we’ve carved into three separate servers, so in the upper left hand corner of the screen we have the OLAP server, which is running 16 processors.

And if you notice, there are other screens that keep changing. That’s 1,500 separately styled queries happening randomly. We’ve got 10 users on there beating on multi-dimensional cuts — so you’re not just looking at, “Gee, what’s this called.” We’re actually looking at, “I want to know the specific sales, in Des Moines, food channel, in dollars, of milk and juices out of this market.” And if you look down below, we’re running on a small two-processor terminal server taking off these queries, and we’re going against the T3 dataset, which is running on an 8-processor partition.

CLIFF REEVES: So it’s a lot of data, a lot of queries?

BOB OSBORNE: Oh, absolutely.

CLIFF REEVES: And the processor, is that the processor load over there on the right?

BOB OSBORNE: Actually, those are the OLAP processor times and the connection loads, yes, absolutely.

CLIFF REEVES: Okay, so the thing is barely breaking a sweat.

BOB OSBORNE: Oh, not even close. But what do you say we kick it up a notch? And we’ll grab a hold of this and drag it up, if we can, and as it starts to rebuild you actually start seeing the current connections loading up. You see them down in the bottom. And here we’re starting brand new queries coming into this set of boxes. And you notice the processors start to kick up on the OLAP queue?

CLIFF REEVES: Yep, seeing the load. Come on, kick it up.

BOB OSBORNE: Let’s kick it up, there you go, and bam! (Laughter.)

CLIFF REEVES: All right!

BOB OSBORNE: We got it back up.

CLIFF REEVES: Okay.

BOB OSBORNE: A hundred virtual users out here, just pounding away at about 1,500 differently styled connections, randomly generated, multi-dimensional, cutting through 1.2 terabytes’ worth of data with sub-second response time. You talk about true scalability — that’s moving up. When you’re talking about this type of connection, this isn’t your standard TP connection.

CLIFF REEVES: Kick ass, dude, kick ass.

BOB OSBORNE: Oh, absolutely.

CLIFF REEVES: So, all right, thank you very much. That’s really brought it to life. I really appreciate your time.

BOB OSBORNE: Thank you.

CLIFF REEVES: And those of you who liked Bob’s enthusiasm get the opportunity to see him electronically in a few more minutes. Thanks again.

BOB OSBORNE: Thank you.

(Applause.)

CLIFF REEVES: All right, so an attempt to bring a little respect for the lowly server.
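The kind of multidimensional cut the demo ran — dollar sales of one product, in one city, through one channel — reduces to a filtered aggregation. A minimal sketch in plain Python, with a handful of invented sample rows standing in for the real 1.2-terabyte dataset:

```python
# Toy stand-in for the consumer-purchase data: (city, channel, product, dollars).
# Rows and prices are invented for illustration.
rows = [
    ("Des Moines", "food", "milk", 2.49),
    ("Des Moines", "food", "milk", 2.59),
    ("Des Moines", "drug", "milk", 2.79),
    ("Seattle",    "food", "milk", 2.49),
    ("Des Moines", "food", "beer", 6.99),
]

def slice_sales(rows, city, channel, product):
    """Sum dollar sales for one (city, channel, product) cut of the cube."""
    return sum(dollars for c, ch, p, dollars in rows
               if c == city and ch == channel and p == product)

des_moines_food_milk = slice_sales(rows, "Des Moines", "food", "milk")
```

An OLAP server precomputes aggregates along these dimensions so that a cut like this comes back in sub-second time even over terabytes, instead of scanning every row the way this sketch does.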

Okay, so let’s take a look at the next group of users. We sort of talked a lot about the IT professional, the kinds of things they value, and hit on I think a few of the highlights.

The next is, if we take a look at the enterprise picture I showed before, one attribute of it is that for a while the knowledge worker was essentially the power user, and of course they drove demand for — in some cases — departmental servers of their own, which we’re beginning to bring back into the data center now, to put under control and manage the costs and the services they provide. And really the user’s primary demand is on their desktop, and occasionally their mobile device; their desktop is their world, and we serve them with rich sets of tools and high performance laptops and desktop machines.

But increasingly as we start to look at the way people really work in businesses, the way knowledge workers start to work, they’re no longer sitting in their offices in tight departments, in tight hierarchical organizational charts. They’re working across the organization. They’re working between organizations and they’re working in different time zones and they’re working in different companies and they’re working temporarily on projects and then moving on.

So the interesting new power group is the transient knowledge worker team. And the only thing they have in common is not a PC — they certainly don’t share a PC, don’t share devices, don’t even share the same geography or organization; the only thing they can share, when they want to do electronic work together, is a server environment.

And so we’re starting to see a rise in tools and technologies specifically geared at satisfying transient groups of workers, because they’re absolutely demanding high quality service — the kind they see on their desktop — and a set of tools that allow them to work together.

So I’ll talk about a couple of those. What we’ve been doing with Microsoft Office is recognizing that people work together in richer ways than just sending a document around, annotating it and capturing the edits — which is a valuable feature — but how do people really work together: share a document, pool their work, manage a schedule, keep privacy, search the information? And so SharePoint Team Services — which was developed with the server team at Microsoft, and really pushed and driven by the Office team’s knowledge of the way knowledge workers work — is now available as part of Office.

And in Windows .NET, the next version of the Windows servers early next year, SharePoint Team Services will actually be built into the server. It will be an automatic attribute of the Windows server that you can turn on and say: I’d like my power users to be able to go in and set up a shared collaborative space — a place where they can store documents, but somewhat richer than a file share. It will have a customizable look and feel. It will manage to-do lists, calendars, schedules, and it will essentially be a temporary office space for teams of people who work together.

We’ve been using this internally at Microsoft, in sort of an eat-your-own-dog-food style. We go through an annual business planning process every year, and what happens is a few months of feverish activity: gathering business intelligence, looking at our product plans, looking at business opportunities and so on. Essentially, teams of people from all over the company end up working together to figure out where the synergies are — the finance people, the dev people, the business planning people, the people from Office, the people from the servers and so on. They form these tight-knit little groups for about three months and then they vaporize, and after the reviews are done they go back to their jobs to execute the plans they’ve proposed.

And we’ve set up hundreds of these a month during that period. It’s just incredibly the right tool for the job. These things get visited thousands of times, terabytes of data are produced in them, and then they fade away.

So the ability to put these in the server, have native function for your knowledge workers, and then be able to manage it all from a server perspective is absolutely vital, because these things turn over at a relatively high speed.
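That high-turnover lifecycle — hundreds of team spaces created, used for a few months, then fading away — is essentially a provisioning-and-expiry problem. A minimal sketch of the management side (class and field names are invented for illustration; this is not SharePoint’s actual API):

```python
from datetime import date, timedelta

# Hypothetical registry of transient team workspaces: each site is created
# with an expected lifetime, and a periodic sweep retires the expired ones.
class TeamSiteRegistry:
    def __init__(self):
        self.sites = {}          # site name -> expiry date

    def create(self, name, created, lifetime_days=90):
        """Provision a site that expires lifetime_days after creation."""
        self.sites[name] = created + timedelta(days=lifetime_days)

    def sweep(self, today):
        """Remove sites past their expiry; return the names retired."""
        expired = [n for n, expiry in self.sites.items() if expiry < today]
        for n in expired:
            del self.sites[n]
        return expired

registry = TeamSiteRegistry()
registry.create("fy-planning-finance", date(2001, 6, 1))
registry.create("fy-planning-office", date(2001, 8, 15))
retired = registry.sweep(date(2001, 9, 15))   # finance site has expired
```

The design point is simply that expiry is recorded at creation time, so the server can reclaim hundreds of short-lived spaces automatically instead of leaving an administrator to hunt them down.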

So putting that kind of function in the server is really, really important as we start looking at this new power base.

Now, any time you talk about users, you need to talk about managing users effectively. And we’ve begun to see a dramatic uptick in the usage of Active Directory. According to Meta Group, Active Directory is now capturing 50 percent of all new directory deployments across IT. It has a share now of around 40 percent of installed directory services, and the interesting thing is that directories themselves — largely driven by Active Directory deployments — are growing at a compound growth rate of around 140 percent.

The other thing to note is that we always designed Active Directory to handle the incredibly complex task of policy management, user management and access control within a corporation. That was exactly what it was geared for. There are a number of incredibly complex issues to address there, and as a result a lot of planning and design goes in before you deploy an Active Directory.

Until recently, we hadn't seen it used or deployed as much as an Internet directory, as essentially the white pages for a whole community. Now we're starting to see such deployments, and the numbers are swelling rapidly. The Blue Cross Blue Shield extranet was one of the first: they deployed Active Directory primarily as the access-control directory for Internet users, and they're supporting right now about eight million users. The system has been certified for over 15 million, and it was also tested by Netcraft as the highest-performance directory for the extranet, a position that iPlanet had been claiming for some time. It's a market we hadn't spent a lot of focus on, but now Active Directory is beginning to be very important there.

When I talk about AD and start talking about AD and the Internet, it’s important that we start thinking about Passport services as well. And I’m going to loop back around to that when we start talking about the .NET system and talk about how Passport and Active Directory will work together in the enterprise.

So another element of knowledge work: we looked at that SharePoint thing, and like e-mail it's a means of asynchronous communication. If people are in different places at different times, they put things in one place, like a mailbox or a SharePoint Team Services site, and they know that later on they can go get it, or someone else can pick it up and share it. That's the whole goal of these asynchronous collaboration tools, because people aren't available in the same place at the same time.

Now, we're connected a hell of a lot more of the time than we ever have been in the past: connected sitting at our desks, at home on high-capacity lines or dialed in, and of course connected today with mobile devices. More and more people are in different places at the same time, and their ability to recognize that other people are present and to communicate with them across a wide variety of media is really important. We've seen this emerging for quite a while: real-time communication in instant messaging, in voice and telephony, voice over IP, videoconferencing, different kinds of electronic real-time collaboration. But the problem is that each one of them has emerged on a different architecture.

So, for example, for voice we've seen the rise of the H.323 protocols, and for real-time data collaboration, T.120. Unfortunately, they're very different protocols. They came out of the telephony world, and as a result they're very processor intensive, they're very complex to use, and they require the mediation of a server environment to work effectively.

And as a result, we really haven’t seen the aggregation. We’ve seen little bits and pieces of technology, which begin to chip away at this need for professionals and consumers to find other professionals or other consumers to talk to.

And I think we're now about to see quite a dramatic change in that environment, where we'll see a single set of technologies, usable together and supported by a standard protocol, that can address the needs of a variety of devices. One of the problems with the system we've got today is that some of those protocols are so compute intensive that they're really unsuitable for deployment on devices with low power, low processing capability and low memory, like phones and PDAs. As a result, they're just not suitable for instant messaging beyond things like SMS on a phone or a PDA.

Now we're seeing the emergence of the Session Initiation Protocol, SIP, an industry standard that a number of vendors have stood up and said they plan to support. They haven't delivered yet. What we're doing inside Microsoft is delivering a real-time collaboration server, the RTC server, based entirely on SIP. It will deal with multiple devices. It will unify all of the modes of communication: voice, video, instant messaging, presence (that is, the ability to recognize that someone is available on the network, what kind of device they're on, and what kinds of communication are appropriate for them) and the ability to invoke Web services like notification.
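Part of what makes SIP workable on low-power devices is that it is plain text, cheap to generate and parse. As a sketch only (the addresses and header values here are invented for illustration, not taken from any Microsoft product), a minimal SIP INVITE can be built and picked apart with nothing but string handling:

```python
# A hypothetical SIP INVITE; addresses are illustrative. Because SIP is
# plain text, even a constrained device can parse it with simple string work.
invite = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP phone.example.com\r\n"
    "From: <sip:alice@example.com>\r\n"
    "To: <sip:bob@example.com>\r\n"
    "Call-ID: 42@phone.example.com\r\n"
    "CSeq: 1 INVITE\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

# Split the request line from the headers, then index headers by name.
request_line, _, rest = invite.partition("\r\n")
headers = dict(
    line.split(": ", 1) for line in rest.split("\r\n") if ": " in line
)
print(request_line)     # INVITE sip:bob@example.com SIP/2.0
print(headers["CSeq"])  # 1 INVITE
```

Contrast this with H.323, whose binary ASN.1 encodings are far more expensive for a phone or PDA to process.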

And rather than try to describe this to you, I'm going to show you a demonstration, and I'd like to invite Chris Cannon up on stage. You've been seeing a lot of Chris, so get used to him. Chris is going to demonstrate for us the real-time services that are present today in Windows XP, which you can access through Messenger, but which will also be delivered, natively implemented, in the next version of Windows Server, Windows .NET. So Chris?

CHRIS CANNON: Hey, Cliff.

CLIFF REEVES: Rock our world in a real time way.

CHRIS CANNON: Okay. So what you’re seeing here is Windows XP Professional, and this is probably nothing new to many people, being able to collaborate in real time with other folks, using simple communications like our online chat. And my good friend Mr. Osborne is backstage in the green room, and he should be back there on that box anyway. Hi, Robert. Let’s make sure he’s alive. So we’ll kick it up a notch as well.

CLIFF REEVES: Okay.

CHRIS CANNON: We can start real time collaboration with him using audio and video, which is something that previously you had to use a number of different programs or perhaps even go to a meeting room to use.

CLIFF REEVES: There he is. That’s why they call him Orange Bob.

CHRIS CANNON: Hey, Robert.

BOB OSBORNE: Hey, how are you doing?

CHRIS CANNON: Great. You’re looking good back there. So I wanted to ship you that graphic file you were looking for the other day, right?

BOB OSBORNE: Oh, believe me, I need it right now. Good.

CHRIS CANNON: Here's our Windows .NET Server marketing material. And as you can see, in addition to real-time collaboration and communication, we can share files, we can do things like whiteboarding and remote assistance, a whole slew of really cool stuff in terms of working with other people.

CLIFF REEVES: And the point here is that while we show it to you in Messenger, the reason for putting it in the server is that really the important thing in this is the API, so that this function can be embedded in customer support applications and so on.

CHRIS CANNON: Exactly.

Thanks, Robert.

BOB OSBORNE: Thank you. Bye.

CLIFF REEVES: Now, can I get access to anything else?

CHRIS CANNON: You can get access to a number of different things.

CLIFF REEVES: I mean, is it just people to people or can we do any other kind of communication?

CHRIS CANNON: Well, up until now it always has been people to people. Today, I’d like to show you a way that we can access other technology using this same client. And what we have here is another contact in my list that’s actually a Web service sitting on a .NET enterprise server in Redmond and I’ve added this person to my list as a Stock Man and you can see I can say hi and it will come back and say hi.

What we really want to do is show how exciting it is to be able to grab information using this client as a business being able to let people know that it’s a bad day and they’re not supposed to come into work, or streaming your business presentations in a real time format.

CLIFF REEVES: Well, I have more than a passing interest in MSFT. See how they’re doing.

CHRIS CANNON: Okay. And as you can see —

CLIFF REEVES: Happy face or smiling face or sad face?

CHRIS CANNON: Yeah.

CLIFF REEVES: Happy face?

CHRIS CANNON: A smiley face is always a good face.

CLIFF REEVES: All right. So can I get access to any service?

CHRIS CANNON: You can get access to anything that you can wrap a .NET Web service around, which is extremely easy to do. We can embed this application in Web pages or other applications, because all the APIs are exposed.

You can do more than just simple text, too, which I can show you now. If I initiate a camera conversation with my Stock Man, you'll see that I'm actually able to stream MSNBC in real time.

CLIFF REEVES: Okay, I always wondered what Stock Man looked like.

CHRIS CANNON: Yeah.

STOCK MAN: — and then set a deadline for stronger doors to be installed —

CLIFF REEVES: Very cool.

CHRIS CANNON: So we’ve got some exciting things in the pipe that you can use the real time communication features in Windows XP for, in addition to communicating with your family, friends and coworkers.

CLIFF REEVES: So the key point here is that for the first time we will be delivering, integrated across a standard set of protocols, a technology that allows us to deploy these services not just on fairly powerful PCs across high-bandwidth lines, but across low-bandwidth lines to a large number of devices.

The second important point is that all of those services are exposed as APIs, and as a result they can be embedded, so you can write a customer support application for your company that allows interactive communication, to varying degrees, with your customers.

It's fully integrated with Web services, so if you want to introduce some other service that delivers notifications or that people can interact with, it plugs into this environment, into Messenger or into your application.

So you’re starting to see a real sea change, so when you’re running Windows Server your knowledge workers will know it. It won’t be an anonymous server anymore; it’s something that they will bond with, that they’ll value the services from and that they’ll actually call for, and we’ll talk a little bit later about how that works.

Now, I’d like to shift gears just a little and start talking about developers, the third important community that we think we’ve got to do something powerful and dramatic for. And .NET is that, and Paul Flessner, for those of you who were here yesterday, probably talked a fair amount about this, but I’m going to go into a little bit more of the technical detail about exactly how .NET works and how you can think about it.

So think about .NET as really three things. The first is a set of servers. We will be delivering .NET services and tools on Windows and in Windows products like BizTalk, and you'll see a demonstration of how that plays in the .NET world in just a second.

And increasingly, the .NET servers themselves, the products Paul talked to you about yesterday, will deliver the things they produce as Web services. So it's not just that you'll go to Outlook and look at your mail, or go to the Web and look at your mail in Web mail; you'll be able to federate the information that's in there and embed it in other applications. That's just one example of exposing the .NET servers as services that are embeddable and usable in other contexts. You can imagine, for example, embedding certain mail or calendaring applications in the Messenger client and surfacing some of those services within there.

The third thing to think about is the devices that will be supported. The key thing about Web services is that they expose the data they produce in ways that give you two degrees of freedom. The first is that they can be federated from anywhere into any other application, so you don't build a monolithic application; you take existing services, or build new ones, and combine them together in the user interface. Think of it as just-in-time assembly for developers.

The second thing that is fundamental in the design of .NET services is that they use XML throughout. I'll talk a little more about XML in just a second, but one of the key points is how powerful it is to deliver information and then let the device determine exactly how to render it. It doesn't say, "Here's a Web page. Deal with it, and if you can't deal with it, do some hacking around." It says, "Here's the data you need; you format and display it appropriately for the capacity, the visual real estate and the user interaction that's possible on the device you've got." So it innately deals with some of the complexity of building applications once and then targeting them at multiple device audiences.

So let's just take a little trip through the technology that underpins .NET. Many of you know this, but it bears repeating to put things in context. There are three standards that are really important in the .NET premise. The first is XML, and XML actually has many layers of value, but at the bottom layer it's a simple way of expressing data structures and communicating them without having to agree on every last detail. If I wanted to use EDI to pass you an inventory record, I would have to agree with you on bit ordering on the wire, and on actual field lengths and placement. Every single piece of information about the structure would have had to be agreed beforehand before you could do anything intelligent with the data.

It's still important that if I want to hand Chris an invoice electronically, we agree that it has things like name, price, order quantity and delivery address on it. But handing it to him in XML is far easier to interpret, and far easier for us to set up an agreement on; we have to agree on far fewer things to do it. So for one thing, it's just a much simpler way of passing incredibly rich data.
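As a sketch of that point, here is a hypothetical invoice in XML. The element names are invented for illustration; the thing to notice is that the receiver agrees only on names, not on field lengths, bit ordering or physical placement, and it finds each value by name:

```python
import xml.etree.ElementTree as ET

# A hypothetical invoice. Unlike EDI, field order and lengths need no
# prior agreement; the receiver navigates by element name.
invoice_xml = """
<invoice>
  <name>CycleCentral.com</name>
  <item>
    <description>Mountain bike</description>
    <quantity>1</quantity>
    <price currency="USD">899.00</price>
  </item>
  <deliveryAddress>One Main Street, Seattle, WA</deliveryAddress>
</invoice>
"""

root = ET.fromstring(invoice_xml)
# Pull out only the fields we understand, ignoring anything unfamiliar.
qty = int(root.find("./item/quantity").text)
price = float(root.find("./item/price").text)
print(qty, price)  # 1 899.0
```

If the sender later adds new elements, this receiver keeps working untouched, which is exactly the looser coupling being described.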

The other thing is it can actually be used to describe data. It can say here’s the kind of thing you’re going to get and it can describe an invoice, and say an invoice is a thing that looks like this.

So it's very, very powerful for describing the data itself, for providing data about data, and we'll see why that's important in just a second. Next is SOAP, the Simple Object Access Protocol.

If XML carries the nouns, then SOAP is the verb of communication. SOAP is the set of standards that allows me to say, "Create an invoice. Give me a price. Buy three of those." It's the technology and the protocols that allow us to invoke operations on objects or services across the Net. It's very standard and it's open; it was designed by companies like Microsoft, Commerce One, IBM and Ariba, a huge roster of companies who got together to define the SOAP standards, to make sure it was managed completely in the open, but also developed in real life and moved forward really fast.
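As an illustration of SOAP as "the verb," here is a hypothetical GetPrice call. The service name, its parameters and the urn:example namespace are invented for this sketch; the envelope and body structure follow the SOAP 1.1 convention:

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace (standard); the pricing service is imaginary.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

request = f"""
<soap:Envelope xmlns:soap="{SOAP_NS}">
  <soap:Body>
    <GetPrice xmlns="urn:example:pricing">
      <sku>BIKE-30275</sku>
      <quantity>3</quantity>
    </GetPrice>
  </soap:Body>
</soap:Envelope>
"""

# The receiver peels the envelope, finds the Body, and dispatches on the
# operation element it finds inside.
root = ET.fromstring(request)
body = root.find(f"{{{SOAP_NS}}}Body")
call = body.find("{urn:example:pricing}GetPrice")
print(call.find("{urn:example:pricing}sku").text)  # BIKE-30275
```

The "verb" is simply the element inside the Body; because it's all XML, any platform that can parse XML can invoke or serve the operation.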

And last is UDDI. UDDI is a standard that says: if it's true that Web services emerge, and people start writing interesting services like, "Hey, I can sell you this product at this price, in this quantity, with these delivery dates," wouldn't it be interesting if people who were interested in those services could discover them? What's the mechanism for people to get together, aggregate their services and publish them, yellow-pages style? So UDDI is a technology that lets me say, "I've got data, I've got information, I've got services I can fulfill, and I can tell you exactly, using XML, what the SOAP commands are that you can use to access this data," so I can start thinking about dynamic usage across the Web. UDDI says: here's how you find them.
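A discovery query in that spirit might look like the following find_business inquiry. The shape follows the early UDDI inquiry API as published at the time, but treat the namespace and fields here as illustrative rather than definitive:

```python
import xml.etree.ElementTree as ET

# Namespace of the early (v1) UDDI inquiry API; illustrative only.
UDDI_NS = "urn:uddi-org:api"

# "Show me registered businesses whose name starts with 'Cycle'."
inquiry = f"""
<find_business generic="1.0" xmlns="{UDDI_NS}">
  <name>Cycle</name>
</find_business>
"""

root = ET.fromstring(inquiry)
print(root.tag)  # {urn:uddi-org:api}find_business
print(root.find(f"{{{UDDI_NS}}}name").text)  # Cycle
```

A UDDI registry would answer with an XML list of matching businesses and the service bindings (the SOAP endpoints) each one exposes, closing the loop from discovery to invocation.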

And what we're going to do in the next couple of demonstrations is take you through each of those technologies in action. We blasted through some acronyms here and talked about standards and protocols and so on, but here's the fantastic thing about .NET: any of you developers in the room will look at distributed Web services and componentized software and say, "That's the way software should be: dynamically bound, loosely coupled, federated. As long as it performs well and I can make it secure, I'm really happy, because that's exactly the structure applications should take."

But doesn't it mean I have to rewrite everything I've got? The fundamental premise is: absolutely not. It means it will be easier to write new things, but it's also incredibly powerful to be able to take existing applications and turn them into Web services, and to discover and invoke the Web services that are already out there today. I think Chris' next demonstration will take us through that. So why don't you set up, Chris, and tell us exactly what you're going to do here.

CHRIS CANNON: Okay. One of the most common problems a business faces is how to integrate all of the various services and parts of its business, as well as interacting with the customers, suppliers and other vendors it needs to deal with to get business done on a daily basis. Scaling your business to accommodate that, especially in an Internet-based .NET world, gets extremely difficult. And we have a couple of key components that live on our .NET Enterprise Server products that make this much, much easier.

So the scenario that we have to show you today is a simple business process integration demo that is based on a Web site called CycleCentral.com. As you can see, they have an e-business presence using Commerce Server 2000 running on IIS on the Windows 2000 platform.

They have a supplier that may be in a different part of the world that’s using a couple of external third party products to manage their business, one of those being the JD Edwards Enterprise Resource Planning or ERP software package, another one being the McHugh Warehouse Management Suite.

And they’ve also developed some home grown applications that allow them to use wireless devices actually in their manufacturing facility to be able to speed up the order fulfillment process.

So the first part in making all of this happen is to have an agent that's able to do the translation and carry the data from Cycle Central over to the supplier, and from the supplier's ERP system down to the manufacturing site in the warehouse. That product is BizTalk Server, with XML being the language of the .NET enterprise.

So the first thing I wanted to show you is how we actually make this happen. And a product that ships with BizTalk Server is the Orchestration designer. You’ll see that it looks an awful lot like Visio. In fact, it is Visio with a few extra bells and whistles in it.

The simplest fact is if you can draw out how your business operates you can use this.

CLIFF REEVES: So just to paint the picture, the point you're making here is that the JD Edwards system, the McHugh warehousing system and this application have essentially never seen each other before. We can imagine BizTalk Orchestration here and Commerce Server, with JD Edwards out there in cyberspace and McHugh out there in cyberspace, and I want to build this catalogue of products that links them together.

So over on the left side I’ve got all the workflow that can link them together and on the right side in the green there I’ve got the glue that I can use to find the different services?

CHRIS CANNON: Exactly.

CLIFF REEVES: And in the middle is a picture of how it all works.

CHRIS CANNON: You bet. This simply allows you to map actual technology to your business process.

We’ll be able to see this actual business process diagram fulfilled as we go through the demo, so it will help people see where we’re at.

CLIFF REEVES: So I can see it as it runs?

CHRIS CANNON: Exactly.

So let’s go ahead and do some e-commerce. Here is Cycle Central, our Web site. I’m going to go ahead and log on. You’ll see that I’ve been here before; it remembered me. It’s even extending me a discount today. So let’s buy some things, shall we?

CLIFF REEVES: Yeah, let’s do it. So you’ve done all these demos, so we should buy you a gift, I think.

CHRIS CANNON: So now I have a bike and a helmet. I'm going to go ahead and check out while I'm lucky.

One of the great parts about Commerce Server is its support for personalization and profiles, and you'll see that it remembers who I am. It remembers my shipping address, and it also remembers my billing information. I'm going to pick standard shipping and try to cut the cost down a little bit.

CLIFF REEVES: Yeah, I appreciate that. Actually, I’d rather you rode it home.

CHRIS CANNON: And we’ll continue to check out.

So at this point when I submit my order, if we were to look back at our Orchestration designer as we built our business process, when I submit this order, BizTalk Server will take the information out of Commerce and it will send it off in XML to the JD Edwards system, which will then provision the order down to the warehouse facility.

And at the same time, being a consumer, I’d like to know that they got my order. And they’ve also built that into their mechanism as well. So as this process occurs, and I’ll go ahead and submit this and we’ll shortly see a diagram pop up here that will show us where we’re at during the business process. This is all in real time on live systems.

CLIFF REEVES: So this is BizTalk runtime there.

CHRIS CANNON: Exactly.

So we're sending the PO via XML to the ERP system, and once the ERP system receives that purchase order it's going to do a couple of things. First, it's going to send me, the customer, an e-mail (I put my e-mail address in the profile) to let me know that my order has been received.

We've also built a .NET Web service, using a telephony COM object, that will actually place a phone call. I put the phone number for here on stage into my profile before we ran this demo this morning, so we should be getting a phone call from our supplier shortly, and we'll also get an update from Commerce Server.

So we’ll go ahead and answer the phone and see what happens.

COMPUTER VOICE: Your ordering service. To check the status of an existing order, press one. This is a message from Cycle Central to inform you that order 30275 has been received and will be shipped within three working days.

CLIFF REEVES: Very cool.

CHRIS CANNON: So there’s our telephony piece as well.

One of the pieces you saw we needed to do there was update the Commerce Server site, so we're back at Cycle Central. If we click on our order number here and look at the order history, you can see that we have been acknowledged. And we need to make sure this is all happening seamlessly, right? So let's change gears and move over to the supplier side of the house. We'll be an employee at the supplier, logged in to the JD Edwards OneWorld enterprise system. We'll go ahead and query for our order number, 30275, and we'll see our orders for the bicycle and the helmet. Once again, no human intervention: no one had to call this order in to the supplier. It was all done via XML and BizTalk Server.

So the next step in our manufacturing process is to send the purchase order to the warehouse for fulfillment, and you can see in our flowchart here we’re actually at a stop waiting for the order to be shipped.

So let’s go ahead and log into the Warehouse Management Suite and we’ll do a query for that same number; completely different system and you’ll see that because of our Orchestration and BizTalk we were able to get this information funneled from the commerce site to the ERP site and to the warehouse management site with no problems at all.

CLIFF REEVES: Very incredible.

CHRIS CANNON: So there’s our order.

The next thing that happens in this workflow is that a person has to go to the warehouse, pick this thing off the shelf, put it in a box and ship it out to me. As we mentioned earlier, they're using a wireless wide area network at their facility, and I've got this Pocket PC here. I'll bring up the display for it, so you can see that it's here and it's real. They have partnered with a company called Bsquare to build a picking application that lets the people in their warehouse receive, in real time, the information they need to go pull something off the shelf and ship it out.

CLIFF REEVES: Is this the part where I get to do something?

CHRIS CANNON: I think you could help me out here.

CLIFF REEVES: I appreciate that.

CHRIS CANNON: Do you feel like being a warehouseman?

CLIFF REEVES: It will be a step up, but I’ll work at it. All right, so I’ll pretend the warehouse is over here.

CHRIS CANNON: So I'm going to minimize my warehouse management window and move it over here so you can see what happens. When I allocate the order, it will generate yet another XML query through the BizTalk Server, using Microsoft Message Queuing to send the message to the Pocket PC, and he can do his work.

CLIFF REEVES: Okay, yep, it’s here. So I’ve got the order right here.

CHRIS CANNON: You’ve got the order.

CLIFF REEVES: I pick the stuff on the shelf, put it on the pallet, mark it picked, say I’ve done it, okay, acknowledge it. There’s another one, mark, acknowledge that, done. All right, so I’ve done the picking off the thing, got the notification here, sent the replies back, all that stuff running into the warehousing system and then back to BizTalk and et cetera, right?

CHRIS CANNON: Yes, you’re hired.

CLIFF REEVES: Thank you very much. Thank you for giving me a challenging role.

CHRIS CANNON: So if we refresh our screen here, we’ll see that our order has been picked and it’s now ready to ship.

So let’s jump back to our business process here, so we can see what’s going to happen as we finish this ordering cycle.

Once we actually ship the order from the Warehouse Management Suite, BizTalk is going to generate a number of acknowledgments again to show that we've actually shipped it. It's going to update the Commerce Server site, so I can go back to the Web site and see that my order was shipped. It's going to update the ERP system. And it's going to send me yet another e-mail.

The other cool thing that we’re going to get today is we’re using the code name “Hailstorm” .NET Web service, so we’ll actually receive an instant message, because as the customer, I’m online, I’ve got my Messenger client up and I’ll be able to see in real time that my order is on its way to me.

So we’ll minimize this and go ahead and ship the order and here’s the shipping notification that’s going to be sent to me. And if you’ll look down here in the right hand corner, here’s my instant messenger telling me that order number 30275 from Cycle Central has been shipped and will arrive in three days.

CLIFF REEVES: What happens if you click that?

CHRIS CANNON: If we click on this, it will take me back to the Cycle Central Web site where I’m able to see that my order has been shipped, and that’s how fast all of this has occurred using XML over the Internet with BizTalk being kind of the traffic agent between all of this.

CLIFF REEVES: So with BizTalk as the orchestration mechanism, all you had to do to let BizTalk reach all of those systems was expose their interfaces in XML, including the Web services and including JD Edwards: simple XML interfaces on existing rich function, and, Bob's your uncle, you get the ability to integrate these things smoothly.

CHRIS CANNON: Sure.

CLIFF REEVES: Very cool.

CHRIS CANNON: Let’s make sure everything worked the way that it was supposed to, though. I was supposed to get a couple of e-mails in Outlook as well, so we’ll open this guy up and we should see two messages in there. Yep, there’s my shipping notice and my confirmation for the order. So every piece of that business flow process worked. We were able to confirm it visually here.

CLIFF REEVES: All right, outstanding. So I guess the key point is the .NET stuff is powerful. You can use the services that are out there, but it doesn’t require a change in religion or rewriting all of your applications to begin to adopt it incrementally and actually start to put some power in those kinds of new business applications.

CHRIS CANNON: Exactly.

CLIFF REEVES: And at least one position on development is this: it's all very well to look at new tools and revolutionary new approaches, but the fact of the matter is that those exciting applications you'll be running next year are likely to be 90 or 99 percent written already, because they're going to be layers or adaptations of the deep ERP, CRM, SCM and similar applications you've already made massive investments in. As a result, any technology which says "torch that and start building anew" is crazy. The ability to deliver incrementally on the value you already have is absolutely fundamental, but changing development in exciting, new and powerful ways is fundamental too.

So we went through showing you sort of the incremental side of .NET and the use of BizTalk and Commerce Server. Now let’s take a look at the services more in general.

Now, one of the ones worth looking at is the service Microsoft offers called Passport. Passport is essentially an authentication mechanism. In fact, Passport is one of the most heavily visited sites on the Internet, but almost nobody ever types in www.passport.com, though, by the way, you can get there. Any time you go to Hotmail or MSN and you're authenticated for any of the services they offer, the Passport service is actually doing the authentication on their behalf.

In developing that authentication mechanism, we learned what it takes to run an authentication service that can be used by a variety of Web sites. The whole notion of Passport came out of Microsoft's learning in delivering its own online services, and it looked to us like a powerful and functional technology we could make available.

Most of you, I'm sure, have got a Passport. You've installed Windows XP or downloaded Messenger, and for one reason or another you've been prompted to ask whether you want a Passport. And you've seen the authentication service in action at any site which requires Passport authentication.

Now what I want to do is ask Chris to show us Passport used with Windows authentication, integrated with Active Directory, which is something we're going to deliver in the next version of Windows, Windows .NET. So, Chris, why don't you show us how this works?

CHRIS CANNON: Okay. One of the exciting things, as Cliff said, is that we'll soon have the ability to scale outward without our management burden growing exponentially. This is a great thing, because you may have external people outside your business: friends, colleagues, any number of people who need access to data in a given spot. If they're not part of your organization, your local domain or your intranet, creating accounts for them, resetting passwords and all of that sort of authentication work can be burdensome at best.

CLIFF REEVES: So you kind of want that blend of high-level consumer access, but all the privacy and control that you get in a Windows environment.

CHRIS CANNON: Exactly.

So at the simplest level we’ve built a little Web site that will help us demonstrate how simple it is to be able to enable this Passport authentication method.

We have in Seattle a private investigation firm called CyberInvestigations.net and their Web site exposes a number of different things to us. We can see floor plans for our buildings. We’ve got surveillance photos here. Let’s see if there’s anything interesting. I see a couple of things here. You know, that guy in the Porsche looks an awful lot like you, Cliff.

CLIFF REEVES: It’s rented. (Laughter.)

CHRIS CANNON: So I take it this is probably not something you would want everyone in the world to be able to see on the Internet today.

CLIFF REEVES: I’d appreciate it, no.

CHRIS CANNON: Let’s fix that. We’ll go ahead and shut the Web site down. We’ll move over here to my Web server that’s hosting the site, and here’s the Internet Information Services manager. We can look at the properties of our default Web site, jump up to the Directory Security tab, and by editing the properties here we can see that we have the default, anonymous access, which allows everyone in.

So let’s uncheck that, move down here to the new Passport authentication option, enable that, accept the defaults, apply those changes, and we’re done.
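The two clicks Chris describes, unchecking anonymous access and enabling Passport, amount to flipping bits in the site’s authentication settings. Here is a minimal sketch in Python; the flag names and values are illustrative, loosely modeled on the IIS metabase’s AuthFlags bitmask, and are not taken from the demo itself:

```python
# Illustrative authentication flags, loosely modeled on IIS's AuthFlags bitmask.
AUTH_ANONYMOUS = 0x1
AUTH_BASIC     = 0x2
AUTH_NTLM      = 0x4
AUTH_PASSPORT  = 0x40   # hypothetical value for the Passport method

def update_auth(flags: int) -> int:
    """Uncheck anonymous access and enable Passport authentication."""
    flags &= ~AUTH_ANONYMOUS   # remove the "allow everyone" default
    flags |= AUTH_PASSPORT     # require a Passport sign-in instead
    return flags

site_flags = AUTH_ANONYMOUS        # default Web site: everyone allowed in
site_flags = update_auth(site_flags)
passport_required = (site_flags & AUTH_PASSPORT) != 0
anonymous_allowed = (site_flags & AUTH_ANONYMOUS) != 0
```

After the change, `passport_required` is true and `anonymous_allowed` is false, which is exactly the before/after state Chris walks through in the property sheet.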

CLIFF REEVES: Okay, so the previous access to the site was completely unlimited.

CHRIS CANNON: Completely unlimited.

CLIFF REEVES: Now you’ve said the only people that can get into the site are people with a Passport. You’ve basically handed over authentication control for your site to the Passport service.

CHRIS CANNON: So now we’ve narrowed it down to a few million users.

CLIFF REEVES: Right, about 160 million so far and about 10 million more each month.

CHRIS CANNON: So that’s still probably a couple more people than you would like watching those surveillance photos, right?

CLIFF REEVES: Probably.

CHRIS CANNON: So I’m going to jump into the Web root directory here. The surveillance JPEG file is where those photos are located, so we’ll look at the security for that and see that by default everyone has access to it. We’re going to go ahead and remove that entry and apply the security.

And we’ll change back over to our browser machine and fire up the Web site one more time. The first time this comes up after you’ve enabled the Passport authentication method on the back-end server, it will take a second or two for the client to go out to Passport, grab the information that it needs and get redirected to the Passport logon screen.
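The round trip Chris is waiting on here follows the usual federated sign-in shape: the protected site redirects the browser to the sign-in service, the user authenticates there, and the service sends back a signed ticket the site can validate without keeping its own password store. A toy sketch of that shape in Python; the hostname, key, and ticket format are invented, and the real Passport protocol used its own encrypted ticket format:

```python
import hmac
import hashlib
from typing import Optional

# Hypothetical shared key the site registers with the sign-in service.
SITE_KEY = b"example-site-key"

def login_redirect(return_url: str) -> str:
    """First visit: the site bounces the browser to the sign-in page."""
    return f"https://login.example.net/signin?ru={return_url}"

def issue_ticket(email: str) -> str:
    """After a successful sign-in, the service hands back a signed ticket."""
    sig = hmac.new(SITE_KEY, email.encode(), hashlib.sha256).hexdigest()
    return f"{email}|{sig}"

def validate_ticket(ticket: str) -> Optional[str]:
    """The site verifies the signature instead of storing any passwords."""
    email, sig = ticket.split("|")
    expected = hmac.new(SITE_KEY, email.encode(), hashlib.sha256).hexdigest()
    return email if hmac.compare_digest(sig, expected) else None
```

The second sign-in is faster in the demo because the browser already holds a valid ticket, so the redirect step is skipped.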

So as we’re waiting for this to happen, once again the beauty of live, real time Internet, we can talk a little bit about what we do after we’ve got Passport accounts in the Active Directory.

I’ve created an account at a non-Microsoft ISP just so you can see that Passport is not tied to using a Microsoft address.

CLIFF REEVES: Any e-mail address is valid.

CHRIS CANNON: Any e-mail address is valid. I’ve got an account with Yahoo that I built, and if I can remember the password for this one, we’ll go ahead and sign in, and we’ll see that now we’ve been authenticated via Passport. We’re able to get back to our Web site. And if we try and browse the surveillance photos, we get kicked out. We’re not authorized to do this.

Now let’s take it a step further. What’s really exciting about this and what I think really paints the picture nicely for what a useful tool this will be is the ability to extract Passport information and populate it into the Active Directory so you can act against it just like you would any local user you built.

Now that’s not to say we want to suck the entire Passport database into your Active Directory, but we have the ability to do those sorts of things if we need to, so that we can apply very specific authentication security to it.

CLIFF REEVES: So could you take just one of those Passport users and then give them access to things, but not the rest?

CHRIS CANNON: Exactly. Now I can do that.

For our demo I actually built a piece of script that allows anyone who accesses the Web site with Passport to be automatically added to Active Directory, probably not something I would recommend, but something you can do if you feel the need.
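The provisioning logic Chris describes can be sketched as a first-visit auto-create. This is a hypothetical stand-in in Python: the demo used a script against Active Directory, whose actual code isn’t shown, and the dictionary here simply models the directory:

```python
# Stand-in for a directory of user accounts (the demo used Active Directory).
directory = {}

def ensure_account(passport_email: str) -> dict:
    """Auto-provision a directory entry the first time a Passport user shows up."""
    if passport_email not in directory:
        directory[passport_email] = {
            "email": passport_email,
            "source": "passport",   # mark it as an externally authenticated user
            "groups": [],           # local groups can then be granted as usual
        }
    return directory[passport_email]

acct = ensure_account("passportdemo@yahoo.com")
ensure_account("passportdemo@yahoo.com")   # a second visit reuses the same entry
```

As Chris notes, you would normally filter which Passport users get created rather than admit everyone, which is why he flags the open version as something he wouldn’t recommend.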

CLIFF REEVES: You could filter it.

CHRIS CANNON: Let’s take a look at the properties of our JPEG here and go back into the security tab and we’re going to add my Passport user, and I can’t remember the full name of this, but I know it starts with Pass.

CLIFF REEVES: There it is.

CHRIS CANNON: There I am. You can see it’s my Passport demo at Yahoo account. We’ll go ahead and add that in, say OK, and now we can apply security any way we would for a local user.

CLIFF REEVES: So you can give this user access to the JPEG now.

CHRIS CANNON: Exactly.

So let’s jump back over to our Web site and we’ll try this out and make sure that it works; sign in via Passport again. Obviously it’s a lot faster the second time. And we check our surveillance photos once again, and now we can see that we’ve actually granted granular security using Passport. You’re no longer required to persist user accounts and passwords. Your administrators will love you.
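The end state of the demo separates authentication from authorization: Passport proves who the visitor is, while the local ACL still decides what they may see. A toy sketch of that split in Python, with the file and account names invented for illustration:

```python
# Toy ACL: each resource grants read access only to the identities listed.
acl = {
    "surveillance.jpg": {"passportdemo@yahoo.com"},  # the one Passport user we added
}

def can_read(user_email: str, resource: str) -> bool:
    """Authentication (who you are) came from Passport; authorization
    (what you may see) is still decided by the local ACL entry."""
    return user_email in acl.get(resource, set())

allowed = can_read("passportdemo@yahoo.com", "surveillance.jpg")
denied = can_read("someoneelse@hotmail.com", "surveillance.jpg")
```

This is why narrowing from "everyone" to "any Passport holder" to "this one Passport user" took two separate steps in the demo: the authentication method and the ACL are configured independently.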

CLIFF REEVES: So we started with no control over access, then Passport-only control, which is still a large number of people, then cut some off, and finally mixed and matched Passport users with the company’s internal users in AD. So it’s a really nice blending and federation of Passport and Active Directory, and that will be native in Windows.NET Server when we ship it next year.

So thanks very much, Chris. Thanks a lot.

(Applause.)

CHRIS CANNON: Thank you.

CLIFF REEVES: We’re getting close to the end. In fact, I want to wind this up fairly quickly. One of the reasons is that a few weeks ago, when we were making the final preparations for these sessions, somebody said to me, “So what’s going to be the difference between your presentation and Flessner’s?” And I said, “Well, it’s pretty much going to be the same difference as there is between Flessner and me. Mine will be shorter and funnier.”

So we’re going to wrap up now and sort of just kind of nail down the points we’ve made.

So a key point here is that there really is a change going on in the way we think about Windows servers, the audiences that use them, and the effects that trends in the industry, demands of users and trends in technology are going to have. We really believe that while it is absolutely fundamental for Microsoft to deliver the highest scale-up kinds of servers that can run in your data center, servers that compete with and beat the existing incumbents at that exotic high end of server capability, and we’ve done that and we’re there, there will in fact be a sea change over time as we start to see the value of servers that support a scale-out architecture and environment, and I think that Windows.NET will drive that quite significantly.

The second point is that knowledge workers have become the brand new power group. Groups of knowledge workers form transient teams, so what we have to do is deliver technology that gives them, with no latency and no delay, the instant gratification and the instant tools they need to get together and work productively, asynchronously and in real time on the Web, and then take those technologies and build them into our business applications. So the knowledge-worker transient group as the new power group is the next trend that I think will drive demand for Windows servers.

And last but by no means least, and maybe in the long term the most profound, is the shift away from monolithic applications to applications that consist of servers and services, tools and user experiences, all linked dynamically, which, if you ask any programmer, is actually the right shape for software. In fact, I really think Web services deserve the term “free software,” because they’re free to be developed, deployed and consumed exactly by whoever is fit to produce them, whoever needs them, and wherever they’re needed. So it really is, I think, a new definition of free software.

So what do I think is important? I really hope that you’re continuing to look at Windows 2000 deployment. You should look hard again at Datacenter. It actually offers tremendous capabilities, not just exploiting full 32-way SMP but four-node fail-over clusters, and it’s a very powerful system that’s delivering some really kick-ass benchmarks in the industry, with the advantages of preset configurations and joint support queues with our partners.

The next is: think about those knowledge workers. E-mail was absolutely a fundamental and important delivery for those folks, and Exchange has delivered strongly on that, but there is a new world of transient power groups and they need to be served. So you should be looking at SharePoint Team Services, which is available now as part of Office, thinking about deploying it, and recognizing that in the next version of Windows, it and the Real Time Collaboration Server will be embedded natively.

And last, let me repeat what Paul said. I think it is incredibly important to start thinking about developing Web services, but that doesn’t mean coding everything from scratch. It means looking at the existing valuable services you’ve got and exposing them in ways that allow them to be automated and accessed by tools like BizTalk in a .NET environment. And of course that’s exactly what we’re doing with the .NET Enterprise Servers like Exchange.

So once again thank you very much for attending MEC. Thank you especially for attending this session. I appreciate all your time and attention and have a great conference. Thank you.