This post was updated on 21-Oct-2010 with new information in the last paragraph.

Most developers realize you can bring in revenue through advertising, and the Microsoft Ad Control lets you do that easily in your Windows Phone 7 application. For most simple apps, you might offer a trial version that is fully functional but supported by ads, and a paid version that is ad free. Or you might just make your app free, supported only via ads. This is what I wanted to do with SqueakBox. I’m beginning to rethink this approach, though, and I’ll explain why.

Before I get to that, let me explain why I thought this would be ideal. First, I’m a big believer in trial versions of applications – Xbox Live has proven the model. To make it super simple, there is a trial API available, so this might sound harsh, but I don’t think there is much excuse for not providing a trial when it is so easy to do. I will say this: providing a trial does open you up a bit more in the ratings, since anyone will be able to rate your application. To make the trial better, I figured it should be fully functional and supported by an ad, so there is no time bomb, etc. Some users would rather pony up the 99 cents to get rid of the ad, while others would be content seeing the ads – either way, the developer and the user win.

Above is a picture of the app running with the test ad (Bing). If the user has the full version (IsTrial == false), we want to remove the ad – and to do this, you must completely remove the ad control from the visual tree. You can’t just disable it or set its visibility to collapsed. I realized this after using Fiddler (as I mentioned in my last post) – the ad control continued to request ads even when disabled or collapsed. Obviously, this isn’t what you want in the full version! Fortunately, it’s an easy fix: remove the control entirely.
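A minimal sketch of that check – the LicenseInformation class lives in Microsoft.Phone.Marketplace, but the control name (AdControl1) and the assumption that it sits in a Panel are mine, not from the original post:

```csharp
using Microsoft.Phone.Marketplace;
using System.Windows.Controls;

public partial class MainPage
{
    private static readonly LicenseInformation License = new LicenseInformation();

    private void RemoveAdIfPaid()
    {
        // Disabling or collapsing the control is not enough -- it keeps
        // requesting ads. Pull it out of the visual tree entirely.
        if (!License.IsTrial())
        {
            var parent = AdControl1.Parent as Panel;
            if (parent != null)
            {
                parent.Children.Remove(AdControl1);
            }
        }
    }
}
```

Call this once in the page's Loaded event; after removal, the control is garbage collected and no further ad requests go out.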
The next snag, unfortunately, is why I might go back to the more traditional trial model. In the marketplace, applications list the phone capabilities they use. When I saw the app published, I noticed this: Whoa! Owner Identity and Phone Calls? No I don’t. Seeing an app that uses the sensors, media library, push notifications, etc. – that sounds cool. But owner identity and phone calls? That sounds disconcerting to me. It turns out this comes from the ad control.

What does owner identity mean? I don’t know. It might be benign – something like, “the ad is able to set a cookie that can identify the phone.” Perhaps if you click through an ad that takes you to the marketplace, it has to know it’s *you*, so that’s what it means? Maybe the phone calls capability is for ads that let you place a call – like Bing 411. As the developer, I don’t know, and the user certainly wouldn’t know. (I got one factual but unhelpful internal reply which basically said, “developers can use another ad provider if they don’t like it.”) (Read the 10/21 update below.)

Users have no way of knowing this isn’t required in the paid version, so I’m left with a choice: leave it in, figuring users won’t really care, or take it out to remove the requirement. What would you do? Other than the phone capabilities piece, the ad control is super nice and easy to integrate.

[Edit] One thing I do want to point out: showing the features an application uses is a great idea. I don’t know how this works on other platforms, but seeing up front that an app needs access to data, sensors, etc., is great.

[EDIT 21-Oct-2010] I just received an update from the Microsoft Advertising adCenter team. The Ad Control doesn’t use Owner Identity; instead, it uses anonymized profile targeting parameters (such as age/gender). Also, it doesn’t use the “phone calls” feature, but it does use the dialer for “click 2 call” functionality (as I suspected).
They are working with the marketplace team to get this updated so the capabilities list isn’t as alarming – when or if it will be changed, I’m not sure. So while the end user experience is still what it is, this is good info, and I was happy to get the correct information.

On an internal alias, Eric Lawrence chimed in on setting up Fiddler for the Windows Phone 7 emulator. This is really useful for monitoring traffic to and from the emulator – since so many apps use data services, it’s a must-have. The first step is to set the local machine as a proxy in the IE LAN settings (or via the control panel) like so: In the Fiddler options, configure the Fiddler listen port and allow Fiddler to accept remote connections: In my app (SqueakBox), I display the user guide as a web page – a choice I made to facilitate changes (it’s much easier to publish a new HTML page!). If I open that page in the emulator: I can see the traffic in Fiddler: Success! In my next post, I’ll discuss two snags I ran into with the Microsoft Ad Control – one of which I discovered while using Fiddler.

While I spend most of my time focusing on Windows Azure and cloud computing, naturally as a developer evangelist I need to dive into Windows Phone 7. I had an idea for an app (not overly original, I admit) and thought I’d explore developing it in my spare time. The result is an app called SqueakBox, and it’s currently up in the Marketplace. So what’s this app? It’s pretty much the next gen of fart apps. Instead of just flatulence (which I never really found all that funny, but it’s in there), the app focuses on sound effects for different situations. My personal favorites are the crickets (great for long periods of silence in meetings) and the police siren (never fails to make the driver’s adrenaline skyrocket). While the idea isn’t incredibly original, I wanted the implementation to be best in class. I implemented a ton of different play modes: it can play on a timer, it can play when the device is moved, and – more interestingly – it can play when the room is quiet, and you can trigger sounds remotely through squeakbox.net. You can loop sounds, create a playlist, and fiddle with knobs and dials like delays, backgrounds, etc. In total, there are about 65 sound effects. But wait, there’s more! :) In all seriousness, I get a bit annoyed seeing 99-cent apps that play ONE sound (there are already plenty of them in the marketplace). Not much of a value, in my opinion. So I put a clip recorder in there so you can record your own clips, or import your own using the Zune library. If you see a cool clip on freesound.org or elsewhere, you can bring it in. So while I’ve done my best to assemble a well-rounded collection of sounds, there might be something else you want to include – go for it. Sales pitch over. I’m going to create a new series of posts on some of the development challenges and document them here. In the first post, I’ll discuss setting up Fiddler and the phone ‘capabilities’ that applications use. Stay tuned.

In my last post, I talked about creating a simple distributed cache in Azure. In reality, we aren’t creating a true distributed cache – what we’re going to do is let each server manage its own cache, but use WCF and inter-role communication to keep them in sync. The downside of this approach is that we use n times more RAM, because each server has to maintain its own copy of each cached item. The upside: it’s easy to do.
So let’s get the obvious out of the way. Using the built-in ASP.NET Cache, you can add something to the cache like so, inserting an object that expires in 30 minutes:
Cache.Insert("some key",
    someObj,
    null,
    DateTime.Now.AddMinutes(30),
    System.Web.Caching.Cache.NoSlidingExpiration);
With this project you certainly could use the ASP.NET Cache, but I decided to use the Patterns & Practices Caching Application Block instead. The reason: it works in Azure worker roles. Even though we’re only looking at web roles in this example, it’s flexible enough to go into worker roles, too. To get started with the caching block, you can download it here on CodePlex.
The documentation is pretty straightforward, but what I did was simply set up the caching configuration in the web.config file. For worker roles, you’d use app.config instead:
<cachingConfiguration defaultCacheManager="Default Cache Manager">
  <backingStores>
    <add name="inMemory"
         type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching" />
  </backingStores>

  <cacheManagers>
    <add name="Default Cache Manager"
         type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching"
         expirationPollFrequencyInSeconds="60"
         maximumElementsInCacheBeforeScavenging="250"
         numberToRemoveWhenScavenging="10"
         backingStoreName="inMemory" />
  </cacheManagers>
</cachingConfiguration>
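With that configuration in place, resolving and using the default cache manager looks something like the following sketch – the key and payload here are placeholders, not from the original post:

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

public static class CacheSetupDemo
{
    public static void Run()
    {
        // CacheFactory reads the <cachingConfiguration> section and returns
        // the manager named by defaultCacheManager.
        ICacheManager cache = CacheFactory.GetCacheManager();

        string someObj = "anything serializable";   // placeholder payload

        cache.Add("some key", someObj, CacheItemPriority.Normal, null,
            new AbsoluteTime(DateTime.Now.AddMinutes(30)));

        object item = cache.GetData("some key");    // null on a miss
    }
}
```

The Add overload mirrors the ASP.NET snippet above: a scavenging priority, an optional refresh action (null here), and one or more expiration policies.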
The next step was creating a cache wrapper in the application – essentially a simple static class that wraps all the inserts, deletes, etc. on the underlying cache. It doesn’t really matter what the underlying cache is. The wrapper is also responsible for notifying the other roles about a cache change. If you’re a purist, you’ll point out that this shouldn’t be a wrapper but a full-fledged cache provider, since it isn’t just wrapping functionality. That’s true, but again, I’m going for simplicity here – as in, I want this up and running _today_, not in a week.
Remember that the specific web instance handling a request knows whether or not to flush the cache. For example, a customer might update their profile, or take some other action that only this server knows about. So when adding or removing items from the cache, we pass a notify flag that tells the wrapper to broadcast the change to all the other instances:
public static void Add(string key, object value, CacheItemPriority priority,
    DateTime expirationDate, bool notifyRoles)
{
    _Cache.Add(key,
        value,
        priority,
        null,
        new AbsoluteTime(expirationDate)
        );

    if (notifyRoles)
    {
        NotificationService.BroadcastCacheRemove(key);
    }
}

public static void Remove(string key, bool notifyRoles)
{
    _Cache.Remove(key);

    if (notifyRoles)
    {
        Trace.TraceWarning(string.Format("Removed key '{0}'.", key));
        NotificationService.BroadcastCacheRemove(key);
    }
}
The Notification Service is surprisingly simple, and this is the cool part about the Windows Azure platform. Within the ServiceDefinition file (or through the properties page) we can simply define an internal endpoint:
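The original post showed the endpoint as a screenshot; the equivalent ServiceDefinition markup looks something like the following sketch (the endpoint name and port are my assumptions):

```xml
<WebRole name="WebRole1">
  <Endpoints>
    <!-- Not visible outside the deployment; used for role-to-role WCF traffic -->
    <InternalEndpoint name="NotificationService" protocol="tcp" port="5000" />
  </Endpoints>
</WebRole>
```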
This allows all of our instances to communicate with one another. Even better, this is all maintained by the static RoleEnvironment class, so as we add or remove instances in our app, everything magically works. A simple WCF contract to test this prototype looked like so:
[ServiceContract]
public interface INotificationService
{
    [OperationContract(IsOneWay = true)]
    void RemoveFromCache(string key);

    [OperationContract(IsOneWay = true)]
    void FlushCache();

    [OperationContract(IsOneWay = false)]
    int GetCacheItemCount();

    [OperationContract(IsOneWay = false)]
    DateTime GetSettingsDate();
}
In this case, I want to be able to tell another instance to remove an item from its cache, to flush everything in its cache, and to give me the number of items in its cache as well as the ‘settings date’ – the last time the settings were updated. This is largely for prototyping, to make sure everything stays in sync.
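The client side of BroadcastCacheRemove isn’t shown here, but a sketch of how it might enumerate the instances via RoleEnvironment would look like the following – the role name (“WebRole”), endpoint name (“NotificationService”), and binding choice are assumptions:

```csharp
using System;
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class NotificationService
{
    public static void BroadcastCacheRemove(string key)
    {
        // Enumerate every instance of the role; RoleEnvironment keeps this
        // list current as instances are added or removed.
        foreach (var instance in RoleEnvironment.Roles["WebRole"].Instances)
        {
            // Skip ourselves -- the local cache was already updated.
            if (instance.Id == RoleEnvironment.CurrentRoleInstance.Id)
                continue;

            var endpoint = instance.InstanceEndpoints["NotificationService"].IPEndpoint;
            var address = new EndpointAddress(
                string.Format("net.tcp://{0}", endpoint));

            var factory = new ChannelFactory<INotificationService>(
                new NetTcpBinding(SecurityMode.None), address);
            try
            {
                factory.CreateChannel().RemoveFromCache(key);  // one-way call
            }
            finally
            {
                factory.Close();
            }
        }
    }
}
```

Because RemoveFromCache is marked one-way, the broadcast doesn’t block on each instance processing the removal.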
We’ll complete this in the next post where I’ll attach the project you can run yourself, but the next steps are creating the service and a test app to use it. Check back soon!

I promised awhile ago that I’d revamp the Worldmaps ranking system – classically, Worldmaps rank was always based on total hits. While this rewarded longer-term usage, it left newcomers with virtually no chance of catching up and competing unless they owned a high-volume site. The other problem with total hits was “rank parking” – that is, a site racks up a lot of volume, then either stops using the service or pings it infrequently. There are two phases in coming up with a better rank: deciding what would be ideal, and deciding whether the technical implementation is feasible given the constraints of the current system. (Worldmaps doesn’t store each hit like a log file, for example, so data is aggregated to extrapolate the various metrics you see on the stats pages.) The first go-around was hits per day. This metric (as you can see below) is nice because suddenly even day-1 users are in the competition – it makes things more interesting. The problem with both Total Hits and Hits Per Day is that neither is a particularly compelling metric. (For those who have done web analytics, you know total website hits are hardly meaningful.) There’s nothing to stop someone from putting 100 image tags on their site, for example, so 1 hit registers as 100. (Though that’s against the TOS. I think. If not, it will be! :)) The problem with Hits Per Day is that the value is averaged out. Suppose your site’s volume increases substantially over a period of 4 months. If all you cared about was rank, you’d be better off deleting and recreating the map account to rank higher. Then I thought about World Domination. Let’s face it: it’s the coolest stat Worldmaps has. But is it the right one to determine rank? I’m not so sure.
The only other solution is a compound metric, such as World Domination × Hits Per Day or something similar – that would still give the edge to traffic, but reaching out globally could move the needle significantly. (A compound stat is a bit harder to work in, given the schema.) Any suggestions?
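As a thought experiment, a compound score like the one above is trivial to compute – this sketch is purely hypothetical (the names and weighting are mine, not Worldmaps’):

```csharp
public static class RankScore
{
    // Hypothetical compound rank: world domination (fraction of the
    // world reached, 0.0 - 1.0) multiplied by average hits per day.
    // Traffic still dominates, but global reach scales the result.
    public static double Compute(double worldDomination, double hitsPerDay)
    {
        return worldDomination * hitsPerDay;
    }
}
```

Under this scheme, a site doing 400 hits/day reaching 25% of the world scores 100, while a 1,000 hits/day site reaching only 5% scores 50 – the smaller but globally diverse site wins.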

Hard to believe, but it has been over 3 years since I spit out SiteSpider. Not that it's any earth-shattering tool, but it really surprises me how fast time flies. The other day I was talking with a buddy about a couple of topics: control invokes to update controls on a form's main thread, and creating worker threads. The threading issue is top of mind for me right now, in part because our debugging tips and tricks session at the MSDN event covers debugging multithreaded applications. I decided to brush the dust off SiteSpider and do a little polishing.

So, what is SiteSpider? First a little history: years ago (I'm guessing the 2001-2002 time frame) I was walking past the office of some developers I worked with in another division. While chatting, I noticed they were running a tool that scanned their content (an online encyclopedia, actually) and generated a list of 404s, etc. While the tool they used was pretty feature complete, I figured the basic premise would be a fun side project -- and that's SiteSpider. You point it at a URL, it crawls the site and generates a link tree, and it lets you look for broken links, slow pages, etc. It's a handy tool to run against your site a couple times a year.

In this version, the biggest change is support for multiple threads. You can specify anywhere from 1 to 100 threads -- obviously, going beyond 10 threads or so is something to be cautious about, but this speeds up the work considerably. The image below shows the new settings dialog. The request delay applies to the individual thread, so with multiple threads, bear that in mind. In addition, the Max Page Size setting will limit the parsing of a page in case the spider hits a ridiculously large file. If you're running with the source code, you can see the threads in the Threads window while broken into the debugger. In the above, there are 5 worker threads (all parked on the Fetch method).
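The control-invoke topic mentioned above follows the standard WinForms pattern for updating UI from worker threads – a sketch, with a hypothetical status label (SiteSpider's actual control names may differ):

```csharp
using System;
using System.Windows.Forms;

public class SpiderForm : Form
{
    // Hypothetical label; any control owned by the UI thread works the same way.
    private readonly Label statusLabel = new Label();

    // Worker threads call this; the update is marshaled onto the UI thread
    // via BeginInvoke when we're not already on it.
    public void ReportStatus(string message)
    {
        if (statusLabel.InvokeRequired)
        {
            statusLabel.BeginInvoke((Action)(() => statusLabel.Text = message));
        }
        else
        {
            statusLabel.Text = message;
        }
    }
}
```

BeginInvoke (rather than Invoke) keeps the worker thread from blocking while the UI thread catches up, which matters when many fetch threads report at once.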
The design of the app had to change pretty significantly to support multithreading, largely because of the circular nature of links in a given site and the coordination of work between threads. To facilitate that, an additional thread called the Work Coordinator (seen above) manages the worker threads. The other nice (and simple) change is the URL history, so for frequent sites it's not just an empty text box. (Entries are ordered from last used and can be cleared in the settings dialog.)

Another item I included in this project is a test website. I created it primarily as a test bench so various conditions can be tested deterministically. The main MVC-based site essentially creates an infinitely deep tree that can be crawled. Through parameters in the URL, you can specify the number of branches on each node, the current node, the current page on the current node, and the request delay. For example, suppose you'd like a tree where each node contains 6 branches. To start at the root of such a tree with no page delay, the request would look like so: In this case, notice we're just sending /6/1 in the URL. A /2/1 would be a binary tree. In the case above, I specified a crawl depth of 6 -- a fairly deep test that queried about 56,000 pages in a little over 3 minutes (localhost helped out a lot, I admit!).

The other two tests in the project are a "bigpage" test, which lets you exercise the max page size setting, and a static test tree (similar in concept to the above, but not dynamic) that attempts to ferret out multithreading and depth issues. For example, page A has links to B and C, and B has links to C and D. Page C has a 2-second "working" delay. There are a few other circular references in the tree, and it's easily modified for testing purposes. As it is now, a successful test should look like: So, that's all there is to it.
The application _does_ save the response stream, so it's possible to view that response -- it's just not coded yet. Another known issue: robots.txt files are not honored. Ideally, this would be a configurable option. To download the executable, click here: SiteSpider_Binary.zip. To download the source and test project, click here: SiteSpider_Source.zip.

In the 4.0 release of the .NET Framework, one of the enhancements I'm most looking forward to is the extensibility of the ASP.NET cache. Up until 4.0, the caching system was built directly into ASP.NET, and although it was possible to use it outside of ASP.NET, it certainly wasn't easy. And while .NET 2.0 (and up) has a ProviderBase class to use as a point of extensibility, it's a bit of a task to build a robust caching provider. With 4.0, this will be much easier and more robust. But I'm not on 4.0 ... yet.

I don't want to over-design a full-fledged system; what I needed was to prototype a simple disk-based caching class -- essentially, a simple way to serialize objects to disk. While I make extensive use of the ASP.NET cache in my project, I needed to supplement it because it's just not durable enough. Reboots, app recycles, memory pressure ... all of these are minor considerations if the cached resources are not time intensive to rebuild. But what if you want to cache a "large" item for a week? (I say large not just in memory footprint, but also in the resources used to construct such an object.) So here were some of my considerations:

- The Cache object will be the main source of the cache. On a cache miss, check the "durable" cache. On a miss there, reconstruct the item.
- K.I.S.S. 4.0 will solve a lot of my problems, so I'm not looking to spend much time on this. (Writing this blog post will take more time than writing the code.)
- Need to implement async capabilities.
- Need to implement proper locking.

The first step was to prototype (below) a simple serialization mechanism that handles the serialization/deserialization to disk. .NET generics make this a bit smoother, and I chose binary serialization over XML primarily for speed, but also flexibility. Error handling needs to be fleshed out, and there's an assumption that there's a single folder (Globals.DiskCachePath) for all objects.
Also, the class uses the HostingEnvironment instead of the HttpContext to MapPath, since there won't be a context on an async thread. This behavior would be abstracted in a more robust solution, but for simplicity, it works fine.
public class DiskCacher<T>
{
    public DiskCacher() { }

    public static void SerializeToFile(string name, T container)
    {
        string filepath = HostingEnvironment.MapPath(
            Globals.DiskCachePath + name);
        Stream stream = null;
        try
        {
            stream = File.Open(
                filepath, FileMode.Create, FileAccess.ReadWrite, FileShare.None);
            BinaryFormatter formatter = new BinaryFormatter();
            formatter.Serialize(stream, container);
        }
        catch (Exception)
        {
#if DEBUG
            throw;
#endif
        }
        finally
        {
            if (stream != null)
            {
                stream.Close();
            }
        }
    }

    public static DateTime Deserialize(string name, out T item)
    {
        item = default(T);
        Stream stream = null;
        string filepath = HostingEnvironment.MapPath(
            Globals.DiskCachePath + name);

        if (!File.Exists(filepath))
        {
            return DateTime.MinValue;
        }

        try
        {
            FileInfo fi = new FileInfo(filepath);
            DateTime lastUpdated = fi.LastWriteTime;
            stream = File.Open(filepath, FileMode.Open, FileAccess.Read, FileShare.Read);
            BinaryFormatter bFormatter = new BinaryFormatter();
            item = (T)bFormatter.Deserialize(stream);
            return lastUpdated;
        }
        catch (Exception)
        {
#if DEBUG
            throw;
#endif
            return DateTime.MinValue;
        }
        finally
        {
            if (stream != null)
                stream.Close();
        }
    }
}

The two methods are static for simplicity. I debated using an indexer to store/retrieve, but ultimately felt that functionality belongs to the provider, not the utility methods -- not worth investing in until 4.0. Using this is pretty simple. The type of the object is somewhat irrelevant (as long as it can be serialized by the binary formatter). The simplest example would be:

string myName = "Brian";
DiskCacher<string>.SerializeToFile("cachename", myName);

... and then to rehydrate:

string myName;
DateTime lastUpdated = DiskCacher<string>.Deserialize("cachename", out myName);

Returning the lastUpdated time allows the provider to decide whether or not to expire the item. A bit more realistic example would be:

List<Customers> customers = GetMyCustomers();
DiskCacher<List<Customers>>.SerializeToFile("mycustomers", customers);

and ...

List<Customers> customers;
DateTime lastUpdated = DiskCacher<List<Customers>>.Deserialize("mycustomers", out customers);

Generics are certainly not required for this type of solution, but they do make things easier. The provider itself would offer a cleaner interface for end users, but from a prototype perspective, this was pretty successful. Conclusion: I think .NET 4.0 will offer some great extensibility for disk-based caching and memcaching. Can't wait to play with it!
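The first consideration above – memory cache, then durable cache, then rebuild – can be sketched as a small helper. To keep the sketch framework-agnostic, the memory store here is a plain dictionary and the durable load/save are delegates (in the real provider these would be the ASP.NET Cache and DiskCacher); all names are hypothetical:

```csharp
using System;
using System.Collections.Generic;

public static class TwoLevelCache
{
    // Stand-in for the ASP.NET Cache in this sketch.
    static readonly Dictionary<string, object> Memory =
        new Dictionary<string, object>();

    public static T Get<T>(
        string key,
        TimeSpan maxAge,
        Func<KeyValuePair<DateTime, T>> loadDurable,  // e.g. DiskCacher<T>.Deserialize
        Action<T> saveDurable,                        // e.g. DiskCacher<T>.SerializeToFile
        Func<T> rebuild)
    {
        object cached;
        if (Memory.TryGetValue(key, out cached))
            return (T)cached;                  // memory hit

        var durable = loadDurable();           // Key = last write time, Value = item
        if (durable.Key > DateTime.Now - maxAge)
        {
            Memory[key] = durable.Value;       // durable hit: promote to memory
            return durable.Value;
        }

        T item = rebuild();                    // miss everywhere: reconstruct
        Memory[key] = item;
        saveDurable(item);
        return item;
    }
}
```

A durable miss is signaled the same way DiskCacher signals one: a timestamp of DateTime.MinValue, which always fails the freshness check.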