My Technobabble – https://blogs.msdn.microsoft.com/gblock
Glenn Block – another ALT.NET guy at Microsoft

Updates to our Azure Cross Platform tools

In my last post I talked about the new goodies we added to PowerShell. I meant to follow up with a post about the many of the same features we also added to the cross-platform "azure" command line tool, but fortunately Larry beat me to it! The list includes site settings, GitHub deployment and, my favorite, the much-needed feature I had been requesting for so long that I was going to implement it myself: site restart!
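If you have the tool installed, restarting is a one-liner; the sketch below assumes a site named mysite:

    azure site restart mysite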

HuJS

A few weeks ago we held an event in China called HuJS. This was an event that Troy Howard and I dreamed up when I landed in China in July (yes, I was living in China for 3 months, but that is for another post!). With the help of a great team, and with folks like Mikeal Rogers and Chris Williams advising us from the sidelines, we were able to make it happen. The event was a wonderful experience for me on multiple levels. We exchanged knowledge, we forged new bonds with the community, and we raised more international awareness about what JavaScript developers are doing in China.

For folks that travelled to China it was also an amazing cultural experience, as they got to soak up the local sights and the local culture. We took the speakers to see some of the most amazing places in the world, like the Pearl Tower, and they got to mingle with the awesome folks at Xinchejian, the local hacker space.

Things that went wrong

Every conf has its high points and its low points. Our lowest point was definitely the internet access at the venue. They were not prepared for the number of connections, and it just went down on the second day. Lucky me, it happened right before my talk while I was prepping. For cloud talks that's pretty murderous.

You can read the experiences of others who have attended in the posts below.

The talks

The first part of the event was run JSConf-style, with a single track of back-to-back sessions covering two days. We were fortunate to have a really dynamic mix of speakers, both local and from abroad, covering topics like node.js in the real world, async, WebKit, JS frameworks and cloud.

Travel issues

Two folks, however, were not able to make it, though they really tried: Guillermo Rauch of socket.io fame and Vicent Marti of GitHub. Both tried to improvise and find a way to deliver a talk remotely. For Vicent we were actually able to pull this off (not without hitches); unfortunately, due to internet issues, Guillermo was not able to join us at all :-(. I was really excited to have him joining us, so I was completely bummed that everything fell through.

Guillermo did, however, record a talk for us on building high-performance applications using HTTP, WebSocket and SPDY. Watching it makes me wish even more that he had been present to deliver it, as it has great technical content and real-world experience around scaling node!

Surprise speaker

In Guillermo's place, we were fortunate to have Domenic Denicola of Barnes and Noble step in at the last minute and give a great talk on client-side JavaScript!

My Rundown of sessions

Unfortunately I did not get to attend all the sessions (it goes with the territory of running an event), but here are notes on the ones I did attend, which, ironically, was many more than I usually manage at events I simply go to.

In this talk, which kicked off the event, Troy Howard from AppFog tried to rally Chinese developers to join the global OSS community. He talked about the Apache project (which he is a member of) and encouraged local developers to use more FOSS, to get involved, and to commit to Apache projects as well as other OSS projects. He told them, "I want to see your names on the commit logs." It was an inspirational talk and the community really enjoyed it.

Steven gave an amazing talk. He built a taxi reservation app, initially using Knockout.js and a server backend, then migrated the application to the cloud using Azure Mobile Services. From the feedback, people loved it. This quote came from one attendee: "It's quite possible that Microsoft may pull off a successful transition to open source, open web and cloud!"

“NODE.JS POT HOLES” by Yuan Feng.

Yuan Feng works at Taobao, the eBay of China, where they use node.js as part of their backend. His talk was lively and entertaining, focusing on common but hidden pitfalls with node. Although his talk was in Chinese, Yuan Feng went out of his way to provide English translations on each of his slides. The talk was interesting because Yuan Feng showed actual bugs they had encountered in their systems.

Vicent's talk was on Hubot, a bot authored in node.js which handles a lot of automated infrastructure tasks at GitHub. Hubot initially supported only IRC, but they added support for a ton of other protocols, and it is extensible (Jabbr, anyone?). Hubot receives commands via chat, which cause it to kick off scripts written in CoffeeScript. The scripts report their results back to the channel. Some tasks are callable by anyone on the channel while others require authorization. Authorization is based on who the user actually is; the model is pretty simple and, in the IRC case, relies on the user having a registered IRC name, so they can be sure the person is who they say they are. I did ask myself a few times why we don't use this kind of automation internally ;-). The best part of Vicent's talk was his slides, which were amazingly professional and artistic in a comic book sort of way.

The major hitch of his talk was that he was unable to join us live. He did, however, reluctantly agree to try to deliver the session remotely. Internet in Shanghai is NOT good, so this was a pretty brave undertaking. Vicent recorded his video ahead of time and sent it to me. We then played the video during his slot, with him watching the screen over Skype and live video showing his face. To keep it interactive, I would periodically pause the video and ask him a question, or he would add some clarity. We hit a few technical hitches due to bandwidth issues (as predicted), but the video quality was fantastic thanks to it being pre-recorded. After his talk he had a few minutes for live Q&A with the audience, which they really appreciated.

“PORTING NODE: THE JOURNEY FROM LAKE JAVASCRIPT TO THE STRAIT OF LUA” by Tim Caswell.

I found this to be a fascinating talk. Tim kept a really good pace for non-English speakers. He described how he set out to write Luvit, a Lua port of node on top of libuv. He did this because Lua runs on a lot of embedded devices. One huge challenge was that he did not know C++ or Lua when he started the project. Tim described in detail the process of learning libuv and Lua, mapping node concepts to Lua, and the adjustments he had to make based on language constraints. With the help of a community that formed around the project, he managed to get pretty darn close.

Wind.js is a pretty interesting async framework. It is yet another example of folks dying to have async/await-type syntax in JavaScript. Like Streamline, Wind.js does rewriting; unlike Streamline, however, it does this all at runtime and doesn't require running anything other than node. The downside is the syntax is pretty verbose and it deeply pollutes your code. In this talk Jeffrey talked about Wind.js and how to debug it. Because he does all the rewriting at runtime, he is able to capture all the information in memory to map back to the original lines of code, something that Streamline does not do. I am not a fan of the Wind.js approach because of how it "infects" the code, but I do see how the runtime rewriting has significant benefits over the static generation that Streamline uses. Aside from that, it was great to see some really interesting node hacking happening in mainland China.

“Run with node.js for fun and profit” by Charlie Robbins. This was a variant of an earlier talk I've seen Charlie give, which covers the evolution of node.js and how it is exploding. He used a powers-of-ten approach, starting at 10 to the 1st (the number of core committers) and working his way up, at each level talking about how the number relates to node, with plenty of Nodejitsu along the way. One thing Charlie did that really resonated with the crowd was pulling several Chinese words into his slides. It was a fun talk. Charlie has celebrity status here, however, so people loved the talk regardless of what he said.

The main theme of James's talk was not building monolithic node applications. This has been a passion area for James, as he has authored several different modules to address the problem. The crux of his talk was encouraging folks to break as much code as possible into npm modules, and to use modules like fleet and seaport to let apps easily scale and distribute workloads across many machines and processes. Fleet is a git-integrated mechanism for continuous deployment and process management. Seaport is a centralized registry for network-based services. James argued that with Seaport you can have components in your system sitting side by side but using different versions of the same service. He said both modules have been hugely valuable in running the infrastructure for Browserling.

“The Secrets of Node revealed on Windows and Azure” by Glenn Block.

My talk focused on educating about, and eradicating misconceptions around, Node.js as it relates to Microsoft, Windows and Azure. The talk was a bit campy, as I used one of those "secrets revealed" documentary-style themes. I started off talking a bit about myself, how I ended up in Shanghai and how I came to put on the event. Then I talked about node.js, how it initially did not work on Windows, and how Microsoft jumped in to help node.js have a first-class experience on Windows. Next I showed npm and how it works on Windows. Then I moved to the cloud, talking about how node.js works in Azure and how you can deploy to it from any platform, or even deploy Linux VMs. As part of the talk I did a bunch of demos of node, npm and finally Azure. The Azure demo hit some hitches due to network issues; I worked around them by using the Azure portal to deploy. Overall I think the audience was shocked and surprised to see node working on Windows. After my talk a guy who runs a local startup came up to me and said, "I used to hate Microsoft because they didn't care about OSS; that changed when I saw you today." That made me pretty happy :)

“NEW APP('WINDOWS 8');” by Aiken Qi.

Aiken Qi gave a nice overview of Windows 8 application development and the JavaScript programming model. He demoed a Windows 8 app live and also walked through the code. I spoke to several folks after the talk and they were really excited to see the way Microsoft has embraced JavaScript for client development.

Panels

On both days we had a panel of the speakers for that day. That was really interesting because there was a mix of English and Mandarin speakers sitting side by side, with questions coming from the audience for both, and in two different languages :). The outcomes of the discussions were not so important, though some interesting ideas percolated. The real value, I felt, was in the international exchange itself.

Day 1

On the first day the discussion was about frameworks. The first part was a frameworks-vs-libraries discussion for node, in which most folks were on the side that you should prefer small libraries over frameworks. The second discussion was about JavaScript templating engines and which style the speakers preferred, i.e. string-based or DOM-based. Most erred on the side of DOM-based, though Nodejitsu has an interesting model with Plates, which runs on the server and uses a string-based representation of the DOM. Steve gave a really good explanation of the difference between the approaches, which I really appreciated. Another topic covered was the community in China and what the panel thinks Chinese developers should do to be more involved. The basic sentiment was for folks to get more involved with OSS projects and to be active on the mailing lists.

I asked Charlie Robbins about native modules in node as they relate to cloud providers. Charlie said that Nodejitsu fully supports native modules and that he believes all providers should do the same. His argument was that there are just too many flavors of distros/configurations, and that building on the provider is the only surefire way to ensure a module works.

Day 2

The crux of the day 2 panel was whether or not to use OOP libraries in JavaScript. The overall tone was: don't do it, embrace prototypes; though a few folks thought it was perfectly fine to use OOP constructs. The second discussion was about CoffeeScript: should you use it or not? This came up in the context of Vicent's talk on Hubot, as it uses CoffeeScript. There was a split across the board among the panelists. Some thought it was completely fine, others thought it was horrible ("don't do it"), and there were people in between. I spoke about my own personal preference, which is not to use it in a published npm module; other than that it's fine. Domenic Denicola (from Barnes and Noble) pushed back and said that if you are following James Halliday's recommendation, then everything is a module, thus you shouldn't use CoffeeScript at all.

Another question came up about which languages we use to develop. People cited various languages, but Domenic made an interesting comment, saying the only language he cares to learn nowadays is JavaScript, and that with server support he needs nothing else. Domenic is a little biased, as a lot of the work he is doing is mobile-app based, where he is just using simple web APIs on the backend and calling out to other infrastructure. Several others agreed with him, though, that for many things JS is sufficient today.

Finally, the last question asked the speakers which cloud platform they prefer. A few of the foreign speakers said they use Nodejitsu, though I definitely said Azure :-). I also said that I have used Nodejitsu and it is nice for a very targeted node experience. I said that Azure offers the best hybrid cloud solution (PaaS, IaaS and on-prem support) and that I had heard hybrid cloud was very important to several local folks I had spoken to.

Hackathon

Following the event we had a full-day hackathon at People Squared, where both speakers and attendees came together for some hardcore hacking.

Bob Zheng of People Squared welcoming folks

Different teams at the hackathon

We started off open-space style, with folks suggesting projects they wanted to work on. Then everyone voted on the sessions they liked and teams were formed.

My team, with Tim Caswell, was building a new tool called "crate" which packages up a node application, its modules and node itself into a self-extracting/executing exe. I helped with the design and actually picked the name, but unfortunately had to leave early due to a family commitment. The project is up on GitCafe, a new GitHub-like offering in China.

Several other really cool projects came out of the hackathon, including a Gravatar-type service whose avatar changes dynamically based on your mood or the time of day.

The best part of the hackathon for me was the collaboration between local and international developers working together to solve problems.

Summary

HuJS was a fantastic experience. It was amazing to see developers from the opposite ends of the world with very different languages and customs come together, share and learn.

For the folks in China, I hope they came away feeling that although an ocean separates us, we have many things in common and should work together. For folks who came from the US, Europe and elsewhere: remember, when you are on the node/JavaScript lists, that you have friends in China, and think about their needs as developers. Do your best to bridge the gaps!

And as for the organizers, we hope this is the first of many China JS events to come. I am talking to you, Beijing!

Thank you to our sponsors, our organizers, our speakers and our attendees who made HuJS possible!

REST in Practice, the book. RestBucks, the sample

The book

If you've watched any of my recent talks, one book you may have heard me mention quite a bit is "REST in Practice" by Jim Webber, Savas Parastatidis and Ian Robinson. What I really like about this book, and what differentiates it from others on REST, is that it broaches the concept of REST in the enterprise. This book is essential for those building RESTful business systems. It has helped my own understanding tremendously, and I keep it practically by my bedside. It covers the concerns of folks taking business systems and moving them to a RESTful architectural style, and it gradually takes people across the chasm from traditional RPC-style services, to HTTP CRUD services, and finally to RESTful services. It is by no means the only book on the topic; there are other good resources I would also recommend, like Subbu's RESTful Web Services Cookbook.

RestBucks, the sample

One of the cornerstones of the book is a reference application called RestBucks, which takes the process of ordering coffee and exposes it electronically over HTTP using a set of RESTful services. The sample gives folks building RESTful systems a great reference point for how to address various concerns in a RESTful way. One key topic it addresses is hypermedia constraints, i.e. driving business processes through linking. Throughout the book we see various aspects of the sample implemented in several languages, including Java and .NET. The .NET pieces use WCF 4.

Now available for WCF Web API thanks to the community

Since we announced WCF Web API, several folks have been asking for a port of RestBucks using our new Web API. I actually started working on one but, due to other constraints, never got around to making much progress. Fortunately, folks from the community have stepped up to the plate! Several months back, Szymon Pobiega ported the RestBucks code to use Web API: https://github.com/SzymonPobiega/restbucks-wcf. Checking the site, Szymon just recently updated the bits to Preview 4.

More recently, José Romaniello of Tellago released another port of RestBucks, which he blogged about. One nice thing about this port is that it includes documentation pages for the various links, as well as some good content via his growing blog series, which you can access here: http://joseoncode.com/category/restbucks/. José and I have chatted, and he is committed to really fleshing out the docs and keeping the sample alive. Another cool thing is that José has deployed RestBucks on AppHarbor, so you can access it live at restbuckson.net. Remember, these are RESTful APIs, not web pages; you'll want to use something like Fiddler to navigate the app. If you want the source, you can download it from the CodePlex site here: http://restbucks.codeplex.com/. BTW, he's looking for committers.

Big thanks to Szymon and José for the ports!

Get the book, get the samples!

Message Handlers vs Operation Handlers: which one to use?

In WCF Web API we have two extensibility points which seem very similar, yet they are designed for very different purposes. Recently the question of which to use when popped up on our forums. Below is some guidance.

Message Handlers

Message handlers (currently DelegatingChannel, but to be renamed DelegatingHandler) deal only with HTTP messages (hence the name). You can use them to modify the request or the response, or to stop the request and immediately return a response. In general you use message handlers to address HTTP-specific concerns, for example:

Setting Accept headers based on the URI (I do this in the ContactManager)

Checking for API keys, as in Pablo's (@cibrax) handler here, or handling other security concerns like OAuth. (Our OAuth integration will sit here in the future.)

A few things to note about message handlers:

They are configured per host, not per operation.

They sit very low in the WCF stack, in the channel layer, making them ideal for security-level concerns.

They are arranged in a Russian-doll model, meaning each handler calls its inner handler. This lets you plug in pre/post processing around the rest of the pipeline.

They only support async (Task<T>).
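To make that concrete, here is a minimal sketch of a message handler that rejects requests lacking an API key. It assumes the renamed DelegatingHandler base class; the X-ApiKey header name is just an example (namespaces: System.Net, System.Net.Http, System.Threading, System.Threading.Tasks).

public class ApiKeyHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (!request.Headers.Contains("X-ApiKey"))
        {
            // Stop the request here and immediately return a response.
            var tcs = new TaskCompletionSource<HttpResponseMessage>();
            tcs.SetResult(new HttpResponseMessage(HttpStatusCode.Unauthorized));
            return tcs.Task;
        }
        // Otherwise delegate to the inner handler: the Russian doll at work.
        return base.SendAsync(request, cancellationToken);
    }
}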

Operation Handlers

Operation handlers (HttpOperationHandler) deal with taking the message and manufacturing the parameters for an operation, or vice versa. It is the responsibility of the operation handler pipeline to ensure that for every parameter in the method there is a corresponding value; it is also the responsibility of the pipeline to take any return/out values and modify the message appropriately with the correct representation. Those values can either be pulled from the message itself (such as parsing the URI into an object) or produced by custom logic, like talking to a repository. The exception is the value that represents the content of the message: it is the responsibility of the formatters to turn the body into an object, or vice versa. Common use cases:

Taking a portion of the URI and manufacturing an object. An example would be taking a URI like http://maps.com/location/100,00 and turning it into a Location object with latitude/longitude properties.

Taking the content and compressing/decompressing it with gzip. Darrel Miller (@darrel_miller) has a nice example of this.

Logging specific parameters of an operation.

Validating the body, i.e. someone posts a customer in one of multiple formats. The formatter runs and turns it into a customer (based on conneg), and then the handler validates the model to ensure the values are correct.

Here are the things to note about operation handlers:

They are configured per operation, not per host.

They manufacture values which are used by other handlers in the pipeline or by the operation itself.

They are arranged in a pipeline where each handler is called in succession, but one handler does not call the other.

They only support sync in the box.

They can stop the pipeline by throwing an HttpResponseException<T>.
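As a rough sketch of the first use case above (URI segment to object), an operation handler could look something like this. The base-class shape shifted between previews, so treat the signatures as approximate; the Location type and argument names are mine, not from any shipped sample:

public class Location
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

public class LocationFromUriHandler : HttpOperationHandler<string, Location>
{
    // The base ctor names the output argument this handler manufactures,
    // which binds it to the operation's 'location' parameter.
    public LocationFromUriHandler() : base("location") { }

    public override Location OnHandle(string coordinates)
    {
        // e.g. the "100,00" segment from http://maps.com/location/100,00
        var parts = coordinates.Split(',');
        return new Location
        {
            Latitude = double.Parse(parts[0]),
            Longitude = double.Parse(parts[1])
        };
    }
}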

Which to use really depends on your scenario. If you are trying to address pure HTTP concerns, use message handlers; if you are trying to address application-level concerns that work with models etc., use operation handlers.

Using DataContracts with WCF Web API

A few folks have been asking if it is possible to serialize/deserialize using the DataContractSerializer. Rest assured, yes, it is possible. Whether or not it is the easiest or most intuitive model, well, that is a different question.

If you read to the end of the post you will see a bunch of extensions which have been pushed to webapicontrib and make this all MUCH easier (at least I think so). If you don't want to jump ahead, read on for what you need to do to wire it up yourself.

It starts with the SetSerializer<T> method, which is exposed on the XmlMediaTypeFormatter. You can call this method and pass a DataContractSerializer for T. That tells the formatter to use this serializer rather than the default. You need to give it the exact type; for example, if you are returning List<Contact>, then make T List<Contact>, NOT Contact. If you want to support both, you need to register both.

To allow you to get to the formatter, the MediaTypeFormatterCollection exposes an XmlFormatter property. You might now be thinking, “Oh that’s easy, but how do I get a hold of the formatter collection?”

Good question. It depends on whether you are on the server or on the client.

On the Server

The easiest way to configure things on the server is to annotate your service/resource with a [DataContractFormat] attribute. Once you do, we will automatically use the DataContractSerializer.

If you do not want to annotate your service, the alternative is to do it in code. If you are using HttpHostConfiguration, you can get to the formatters by accessing the OperationHandlerFactory property on the config class. If you are using the fluent API, you'll want to cast to HttpHostConfiguration in order to access it. For example, see the code below.

var config = new HttpHostConfiguration();
var formatter = config.OperationHandlerFactory.XmlFormatter;
formatter.SetSerializer<Contact>(
    new DataContractSerializer(typeof(Contact)));
formatter.SetSerializer<List<Contact>>(
    new DataContractSerializer(typeof(List<Contact>)));

The code above is setting the formatter to use the DCS for Contact and List<Contact> both for reading and writing.

On the client

On the client side, how you configure things depends on whether you are reading or writing. In either case, you call SetSerializer<T> on an XmlMediaTypeFormatter.

For reading, you create the formatter and pass it to the ReadAs<T> method of the HttpContent, which is accessed off the Content property on HttpResponseMessage.

var formatter = new XmlMediaTypeFormatter();
formatter.SetSerializer<List<Contact>>(
    new DataContractSerializer(typeof(List<Contact>)));
var contacts = response.Content.ReadAs<List<Contact>>(
    new List<MediaTypeFormatter> { formatter });

If you are writing to the request, you will access the formatters collection on the ObjectContent<T> instance.

var content = new ObjectContent<Contact>(contact); // 'contact' is the model instance to send
content.Formatters.SetSerializer<Contact>(
    new DataContractSerializer(typeof(Contact)));
client.Post(address, content); // 'address' is the resource URI

New extension methods in webapicontrib to make life easier

Now that we've seen how you can do it, we can wrap this up in a bunch of helper extension methods and life gets a whole lot easier. I did exactly that and pushed them to webapicontrib.

Check the code below from the new DataContractExample in webapicontrib.
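(The example code didn't survive the feed. To give a flavor, a helper in that spirit can be as small as the sketch below; the method name is mine, not necessarily what webapicontrib ships.)

public static class DataContractFormatterExtensions
{
    // Registers a DataContractSerializer for T on the given XML formatter.
    public static XmlMediaTypeFormatter UseDataContractSerializer<T>(
        this XmlMediaTypeFormatter formatter)
    {
        formatter.SetSerializer<T>(new DataContractSerializer(typeof(T)));
        return formatter; // returned so registrations can be chained
    }
}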

Hypermedia and forms

Updated (with a lot of new content)

One challenge when building REST-based systems is: how can the client determine what it can do next? There can be any number of clients, each of which needs to interact with the system. How do they know HOW to interact? The WSDL approach is to offer a static snapshot of method calls on an API. That approach couples the client heavily to the server, including coupling it to how things get processed, and it inhibits evolvability: a change to the server usually breaks all clients. In a world where many 3rd-party apps across devices consume your server application, this can be detrimental, making it extremely difficult to move forward.

Hypermedia

Hypermedia (also referred to as hypertext) is an answer. The word may sound scary, but it basically means: use links. When Tim Berners-Lee, Roy Fielding and others (Al Gore) were envisioning the Web, linking was a key component of the design. We're used to seeing links in UI contexts like a browser; i.e. using a web-based ordering system you can click on an "Add Item" link to add the item. But what about in a non-UI context? Well, guess what: you can use links there as well.

With a hypermedia approach your server doesn't only return data; it returns data + links. Those links provide a means for the client to discover the available options that make sense based on where the client is in the application. A link has two standard components: a URL and a rel. Rel stands for relation and describes to the client how the link relates to the current resource. For example, let's take a catalog/shopping cart experience exposed in a RESTful way. When browsing the catalog, you get back a list of items. Each item can have links for adding the item to a cart. Below is one such item.

<item>
  <productNo>1</productNo>
  <description>An item</description>
  <price>$19.95</price>
  <sku>123456789</sku>
  <link rel="rc:additem" url="/cart/12456/items"/>
</item>

Above, the server returned a link for adding an item. Unlike an operation in a WSDL, that link is not static or hardcoded as part of the contract. The server offers the links that make sense at a specific point in time based on application and resource state. Assuming, for example, that "An item" is out of stock, the client will not get a link for adding the item; instead it may get a link to place the item on backorder. The client, however, doesn't have any knowledge of the rules on the server. It doesn't know that it can't add the item because it is out of stock; all it knows is that rel="rc:additem" is not present. The server logic might be more sophisticated; maybe there are limits on how much of a specific item I can order. From the client's perspective, all it knows is the link is not present. That's decoupling.

What about evolvability? The beauty of linking is that clients only care about the links they know; they ignore the links they don't. That means the server can offer additional links as it evolves. For example, we may decide to offer clients the ability to put an item in a wish list rather than in the shopping cart. That means an additional link is returned, such as the following:

<link rel="rc:addtowishlist" url="/wishlist/gblock"/>

Older clients won't know about the wish list. They will, however, continue to function. Newer clients will easily consume the new functionality. If you imagine a world with many different clients consuming (which is real), that is huge! There's a further benefit for the older clients: the server has advertised the fact that there are new capabilities. Clients can be designed to track links they don't know, so they can log that new capabilities are available.

An additional benefit of the link approach has to do with the URLs themselves. Clients are agnostic to building URLs, so the server can completely change the format of a URL, or even where it points, without the client being impacted at all.

Jon Moore, a Technical Fellow at Comcast, has a great talk/demonstration of how this works, which he delivered at Oredev. I highly recommend it.

What to do with a link, and why forms are interesting

So now we have links. But there's one problem: how does the client know what to do with them? Which HTTP method should be used: GET, POST, etc.? If it is a POST, what type of content should be sent, and in what form? Another challenge is versioning: once a service has been updated, how do clients know how to take advantage of the new functionality?

One solution is simple: read the spec. When you build hypermedia-based systems you aren't simply returning a media type (Content-Type header value) of "application/xml" or "application/json". Rather, you return something more specific, such as "application/atom+xml", or something even more domain-specific, as in the new "REST in Practice" book, where the system uses the media type "application/vnd-restbucks+xml". Each media type has an RFC/spec associated with it. Thus, if a client author reads the spec, it documents what to expect from a links perspective and which method/media type to use.

That is one approach and it absolutely works. The downside of that approach is the client has to be encoded with specific knowledge of how to work with each link.

That is not the only approach, however, and one really interesting alternative is forms.

Forms

If we look to HTML we can see some hints on how to address these issues. In a web page, if you click a link such as "Add Item", the common paradigm is for the server to return an HTML form (the client may also display one through some client script). That form provides information to the human reading the site to help them move forward. The form contains an action that points to a URI, a method, and other fields which may be either prefilled by the server or require input from the user. The model works in a basically seamless fashion. The site can freely evolve, offering different forms to different users based on a number of factors, including which "version" of the app they are using, their user profile, etc.

Now imagine there is no human. There's a machine client consuming some resources over HTTP. That machine may be jQuery code running in a browser, code executing in a rich client app such as WPF, Silverlight, Flash, or even a Java applet, or headless server code accessing that information. Ultimately the information will likely be surfaced to some human, but only after some initial processing has taken place. For example, there might be an agent that automatically processes orders using a 3rd-party fulfillment service.

We can design electronic forms which our servers offer to clients to guide them on how to use the link to transition to the next state.

With this approach, each link is a GET that returns an electronic form. That form specifies the URL and method, as well as prefilled information from the server, such as item details. The form may also specify required fields, such as a Quantity field.

The advantage of forms is they allow the client to be less coupled to knowledge of how to work with links, which yields greater evolvability.

For example, for adding item '1' to a shopping cart, the server may return the client a link such as the following.

<link rel="rc:AddItem" url="/restcart/forms/additem/1"/>

The client then automatically does a GET on the link and retrieves an AddItem form that looks like the following.

<form>
  <url>/restcart/cart/12456</url>
  <method>POST</method>
  <mediatype>vnd-restcart+xml</mediatype>
  <item>
    <productNo>1</productNo>
    <description>An item</description>
    <price>$19.95</price>
    <sku>123456789</sku>
  </item>
  <required>
    <quantity>1</quantity>
  </required>
</form>

This form offers the client everything it needs to move forward. It does not have to build up the item information to POST, as that has already been sent by the server. The server also specifies the media type for the client to use. The form additionally specifies that the client must provide a quantity.

To move forward, the client will POST to the specified url using the included item as the body. As required fields are specified, the client must supply those as well.

The server can also embed a token in the response in order to guarantee the price of the item for a period of time (say 2 hours). If there is no such guarantee, or the time has expired, the server can return a status code of 409 (Conflict), indicating to the client that it needs to refresh the form.
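To make the flow concrete, here is a rough sketch of a form-following client written against today's HttpClient and LINQ to XML APIs; the body shape it submits is illustrative, since the real shape would be defined by the vnd-restcart media type spec:

// Follow the rc:AddItem link: GET the form, fill in the required field, submit.
var client = new HttpClient { BaseAddress = new Uri("http://example.org") };
var form = XElement.Parse(client.GetStringAsync("/restcart/forms/additem/1").Result);

// Build the body from the server-supplied item plus the required quantity.
var body = new XElement("addItem",
    form.Element("item"),
    new XElement("quantity", 2));

var request = new HttpRequestMessage(
    new HttpMethod(form.Element("method").Value),   // POST, per the form
    form.Element("url").Value)                      // /restcart/cart/12456
{
    Content = new StringContent(body.ToString(), Encoding.UTF8,
        "application/" + form.Element("mediatype").Value)
};

var response = client.SendAsync(request).Result;
if (response.StatusCode == HttpStatusCode.Conflict)
{
    // 409 means the form (e.g. its price guarantee) has expired; re-fetch and retry.
}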

*Note: there are alternative approaches to exposing the cart as a resource; for example, the cart state could be maintained on the client and sent with the POST. In either case it would not rely on session state.

Summary

Hypermedia allows clients and servers to evolve independently. In this approach the server offers clients well-known links at every stage of the client/server interaction in order to guide the client as to what it can do next.

There are various approaches to implementing a hypermedia based system, including using electronic forms.

WebSockets, RIA/JS and WCF Web API at MIX – a whole lotta love for the web

WebSockets

In the talk they spoke about new explorations we are doing into supporting WebSockets in the Microsoft stack. WebSockets is basically sockets for the web, i.e. a way to have a much more responsive in-browser experience. In the talk you'll learn about the work going on in the W3C/IETF around WebSockets, and you'll see the exploratory work we've been doing both at the browser level and on the server to enable a WebSockets experience. I highly recommend this talk; as you'll see, that is a pattern for the other talks I mention below.

Here's a sneak peek of what the coding experience looks like, which I hope you agree is pretty clean.

RIA/JS

In the talk he unveiled new work the RIA team is doing to allow consuming RIA Services from jQuery! Watch the talk: Brad demonstrates how you can build a rich jQuery front end that dynamically pulls all of its data down to the client using RIA. RIA/JS then integrates all the state-tracking/validation richness using jQuery validators. If you are building data-centric jQuery-based applications, RIA can take care of a lot of the dirty work for you, making your job easier*. This also highlights the value of RIA, in that I can reuse my existing domain services across both Silverlight and JavaScript! I know my previous boss (who is deeply missed) Brad Abrams must be very happy seeing this being developed!

*We're not saying RIA/JS is the panacea for all evils, but it certainly is for some.

Here are some of the key scenarios being supported in the first preview.

Query support with sorting, paging, filtering

Changeset submission with two edit modes:

“Implicit Commit” or “write-thru edits”, where updates to field values are automatically submitted to your DomainService. This can enable a Netflix.com- or Mint.com-style experience, where field changes are submitted to the server as soon as the user tabs out of the field.

“Buffered Changes” mode, where your code must call commitChanges() to submit your changeset to the DomainService. Those who have used the Silverlight client will be most familiar with this mode; a submit button is often used to trigger the commit.

Data change events including collection changes

Client-side change tracking with an entity state API for determining what entities and properties have been edited

Entity data model support that allows you to navigate through associated entity properties and collections

WCF Web API and Preview 4

In the talk I covered the evolution of systems that are now making their capabilities directly available over HTTP, including social sites and line-of-business/enterprise apps. I showed the progress we've made since PDC on our new WCF addition. The advances have been significant, including greatly improving the out-of-the-box experience, enhancing the fidelity you can have when working with HTTP, improving the configuration story and improving testability.

For example, I demonstrated how, using our new NuGet package, you can quickly build a simple service that can return and consume XML and JSON, as well as consume form-URL-encoded content. I then threw an existing jQuery front end on top without having to touch the server-side configuration. I also demonstrated how I could take my existing service and quickly refactor it to use an IoC container, in this case Autofac, by taking advantage of a new fluent API (which is still a work in progress).

In the second part of my talk, I demonstrated how having rich control of formats/media types can benefit you. This included showing my resource returning OData, as well as a vcard implementation which I use to import contacts directly into Gmail. Try that with SOAP or plain old XML/JSON, I dare you!

Here's a snippet of the kind of HTTP fidelity I am talking about. Below, my service takes advantage of our new HttpResponseMessage<T> and HttpResponseException to return a model AND work with HTTP at the same time.

[WebGet(UriTemplate = "{id}")]
public HttpResponseMessage<Contact> Get(int id)
{
    var contact = this.repository.Get(id);
    if (contact == null)
    {
        var response = new HttpResponseMessage();
        response.StatusCode = HttpStatusCode.NotFound;
        response.Content = new StringContent("Contact not found");
        throw new HttpResponseException(response);
    }

    var contactResponse = new HttpResponseMessage<Contact>(contact);
    // set the response to expire in 30 seconds
    contactResponse.Content.Headers.Expires =
        new DateTimeOffset(DateTime.Now.AddSeconds(30));
    return contactResponse;
}

The Get method returns a contact and sets caching headers. If the contact is not found, however, it throws an HttpResponseException, passing in an HttpResponseMessage. In this case I am just returning a string, but you have full access to set the response headers/content to whatever you want. Also, you might not have caught it, but the id parameter is now passed in as an integer. In previous drops, as well as in WCF HTTP, you had to use a string. Now we do automatic type conversion. You asked for it, and you got it!

NuGet and Preview 4

After my talk, I was happy to announce that we pushed our next release of WCF Web API (yes, drop the 's') to CodePlex, along with a new set of NuGet packages (webapi.all will get you all of ours). You'll notice a 6th package, WebApi.CrudHttpSample, from Steve Michelotti. Rock on, Steve! Here's a link to more on that package.
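From the NuGet package manager console, grabbing everything is one line (assuming the package id reads exactly as above):

    PM> Install-Package webapi.all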

Preview 4 is loaded with new features, much of which is based on your feedback! We've been developing them for a while and are finally able to get them out there. All of our APIs have evolved significantly in this release.

Adding vcard support and bookmarked URIs for specific representations with WCF Web APIs

REST is primarily about two things: resources and representations. If you've seen any of my recent talks, you've probably seen me open Fiddler and show how our ContactManager sample supports multiple representations, including form URL-encoded, XML, JSON and, yes, an image. I use the image example continually, not necessarily because it is exactly what you would do in the real world, but to drive home the point that a representation can be anything. HTTP resources/services can return anything; they are not limited to XML, JSON or even HTML, though those are the media type formats many people are used to.

One advantage of this is that your applications can return domain-specific representations that domain-specific clients (not browsers) can understand. What do I mean? Let's take the ContactManager as an example. The domain here is contact management, and there are applications like Outlook, ACT, etc. that actually manage contacts. Wouldn't it be nice if I could point my desktop contact manager application at my contact resources and actually integrate the two, allowing me to import contacts? It turns out this is where media types come to the rescue. There is a format called vcard that encapsulates the semantics for specifying an electronic business card, including contact information. RFC 2425 then defines a "text/directory" media type which clients can use to transfer vcards over HTTP.

Notice: it is not XML, not JSON and not an image. It is an arbitrary format, thus driving home the point I was making about the flexibility of HTTP.
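For reference, a minimal vcard body of the sort the processor below emits looks roughly like this (the contact details are made up):

BEGIN:VCARD
FN:Jane Doe
ADR;TYPE=HOME;1 Main St;Springfield;98052
EMAIL;TYPE=PREF,INTERNET:jane@example.org
END:VCARD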

Creating a vcard processor

So, putting two and two together, if we create a vcard processor for our ContactManager that supports "text/directory", then Outlook can import contacts from the ContactManager, right?

OK, here is the processor for VCARD.

public class VCardProcessor : MediaTypeProcessor
{
    public VCardProcessor(HttpOperationDescription operation)
        : base(operation, MediaTypeProcessorMode.Response)
    {
    }

    public override IEnumerable<string> SupportedMediaTypes
    {
        get { yield return "text/directory"; }
    }

    public override void WriteToStream(object instance, Stream stream, HttpRequestMessage request)
    {
        var contact = instance as Contact;
        if (contact != null)
        {
            var writer = new StreamWriter(stream);
            writer.WriteLine("BEGIN:VCARD");
            writer.WriteLine(string.Format("FN:{0}", contact.Name));
            writer.WriteLine(string.Format("ADR;TYPE=HOME;{0};{1};{2}",
                contact.Address, contact.City, contact.Zip));
            writer.WriteLine(string.Format("EMAIL;TYPE=PREF,INTERNET:{0}", contact.Email));
            writer.WriteLine("END:VCARD");
            writer.Flush();
        }
    }

    public override object ReadFromStream(Stream stream, HttpRequestMessage request)
    {
        throw new NotImplementedException();
    }
}

The processor above does not support posting vcards, but it actually could.

Right. However, there is one caveat: Outlook won't send Accept headers; all it has is a "File->Open" dialog. There is hope, though. It turns out that the dialog supports URIs, so as long as I can give it a URI which is a bookmark to a vcard representation, we're golden.

This is where it gets a bit hairy with WCF today. To do this, I need to ensure that my UriTemplate has a variable ({id} is fine), but then I have to parse that id to pull out the extension. It's ugly code, point blank. Jon Galloway expressed his distaste for this approach (which I suggested as a shortcut) in his post here (see the section "Un-bonus: anticlimactic filename extension filtering epilogue").

In that post, I showed parsing the id inline. See the ugly parsing code below?

[WebGet(UriTemplate = "{id}")]
public Contact Get(string id, HttpResponseMessage response)
{
    int contactID = !id.Contains(".")
        ? int.Parse(id, CultureInfo.InvariantCulture)
        : int.Parse(id.Substring(0, id.IndexOf(".")), CultureInfo.InvariantCulture);

    var contact = this.repository.Get(contactID);
    if (contact == null)
    {
        response.StatusCode = HttpStatusCode.NotFound;
        response.Content = HttpContent.Create("Contact not found");
    }
    return contact;
}

Actually, that only solves part of the problem, as I still need the Accept header to contain the media type, or our content negotiation will never invoke the VCardProcessor!

Another processor to the rescue

Processors are one of the Swiss Army knives in our Web API. We can use processors to do practically whatever we want to an HTTP request or response before it hits our operation. That means we can create a processor that automatically rips the extension out of the URI, so the operation doesn't have to handle it as above, and have it automatically set the Accept header by mapping the extension to the appropriate media type.

And here’s how it is done, enter UriExtensionProcessor.

public class UriExtensionProcessor : Processor<HttpRequestMessage, Uri>
{
    private IEnumerable<Tuple<string, string>> extensionMappings;

    public UriExtensionProcessor(IEnumerable<Tuple<string, string>> extensionMappings)
    {
        this.extensionMappings = extensionMappings;
        this.OutArguments[0].Name = HttpPipelineFormatter.ArgumentUri;
    }

    public override ProcessorResult<Uri> OnExecute(HttpRequestMessage httpRequestMessage)
    {
        var requestUri = httpRequestMessage.RequestUri.OriginalString;
        var extensionPosition = requestUri.IndexOf(".");
        if (extensionPosition > -1)
        {
            var extension = requestUri.Substring(extensionPosition + 1);
            var query = httpRequestMessage.RequestUri.Query;
            requestUri = string.Format("{0}?{1}",
                requestUri.Substring(0, extensionPosition), query);
            var mediaType = extensionMappings.Single(
                map => extension.StartsWith(map.Item1)).Item2;
            var uri = new Uri(requestUri);
            httpRequestMessage.Headers.Accept.Clear();
            httpRequestMessage.Headers.Accept.Add(
                new MediaTypeWithQualityHeaderValue(mediaType));
            var result = new ProcessorResult<Uri>();
            result.Output = uri;
            return result;
        }
        return new ProcessorResult<Uri>();
    }
}

Here is how it works (note: how this is done will change in future bits, but the concept/approach will be the same).

First UrlExtensionProcessor takes a collection of Tuples with the first value being the extension and the second being the media type.

The output argument is set to the key “Uri”. This is because in the current bits the UriTemplateProcessor grabs the Uri to parse it. This processor will replace it.

In OnExecute the first thing we do is look to see if the uri contains a “.”. Note: This is a simplistic implementation which assumes the first “.” is the one that refers to the extension. A more robust implementation would look after the last uri segment at the first dot. I am lazy, sue me.

Next strip the extension and create a new uri. Notice the query string is getting tacked back on.

Then do a match against the mappings passed in to see if there is an extension match.

If there is a match, set the accept header to use the associated media type for the extension.

Return the new uri.

With our new processors in place, we can now register them in the ContactManagerConfiguration class first for the request.

public void RegisterRequestProcessorsForOperation(
    HttpOperationDescription operation, IList<Processor> processors,
    MediaTypeProcessorMode mode)
{
    var map = new List<Tuple<string, string>>();
    map.Add(new Tuple<string, string>("vcf", "text/directory"));
    processors.Insert(0, new UriExtensionProcessor(map));
}

Notice above that I am inserting the UriExtensionProcessor first. This is to ensure that the parsing happens BEFORE the UriTemplateProcessor executes.

And then we can register the new VCardProcessor for the response.

public void RegisterResponseProcessorsForOperation(
    HttpOperationDescription operation,
    IList<Processor> processors,
    MediaTypeProcessorMode mode)
{
    processors.Add(new PngProcessor(operation, mode));
    processors.Add(new VCardProcessor(operation));
}

Moment of truth – testing Outlook

Now, with everything in place, we "should" be able to import a contact into Outlook. First we have Outlook before the contact has been imported. I'll use Jeff Handley as the guinea pig. Notice below that when I search through my contacts, he is NOT there.

Now after launching the ContactManager, I will go to the File->Open->Import dialog, and choose to import .vcf.

Click OK, and then refresh the search. Here is what we get.

What we’ve learned

Applying a RESTful style allows us to evolve our application to support a new vcard representation.

Using representations allows us to integrate with a richer set of clients, such as Outlook and ACT.

WCF Web APIs allows us to add support for new representations without modifying the resource handler code (ContactResource).

We can use processors for a host of HTTP-related concerns, including mapping URI extensions as bookmarks to variant representations.

WCF Web APIs is pretty cool.

I will post the code. For now you can copy/paste the code above and follow my directions using the ContactManager. It will work!

What's next

In the next post I will show you how to use processors to do a redirect or set the Content-Location header.

WCF Web APIs Roundup – Volume 2

It's been a while since our last roundup, and since my last blog post for that matter! Fortunately there's been a bunch of cool stuff going on in the community with our Web APIs that I can point you to. It's great to see the community momentum pick up as we cross the 5,000-download mark on CodePlex!

WCF HTTP – Processors under the hood – Gustavo gives a deep overview of how processors work. He also shows how to build several processors, including one that converts URI parameters into native types.

Extensions – WCF HTTP Services – FunnelWeb is an OSS blog engine. This post shows how you can plug in services hosted in FunnelWeb, with DI support through Autofac via the new IoC extensions in Preview 3.

Authenticating clients in the WCF HTTP stack – In this post Pablo Cibraro introduces some extensions for WCF Web APIs, based on interceptors he previously built for the REST Starter Kit, which are used for authenticating clients. He says he believes the new Web APIs offer much richer support for building HTTP-oriented services in .NET. We agree!