Want to run @home with Azure for another team, or use a more powerful CPU? For the true geeks out there, running the Folding@home client involves tweaking, high-performance computing, and knowing the difference between the GPU and CPU clients. We heard from a couple of folks about maximizing their Windows Azure usage, and Jim made some changes to the client piece to accommodate. In truth, we did a little of this the last time we ran @home, but we didn’t draw much attention to it for fear it would just add confusion – so this info is presented as optional; it isn’t necessary to do @home.

First, when setting up the 90-day trial, accounts have a $0 spending limit – which means, unless you intentionally disable the cap, you will never be charged a dime. It also means your account will be shut down when you reach your monthly usage quota. The 90-day trial allows 750 compute hours per month, which is 1 small instance running 24x7. If you’d like, you can run either 8 single-core instances or a single 8-core instance; however, you’ll burn 192 compute hours per day and exhaust the limit in about 4 days. You could also run a dual-core instance for half a month. Visual Studio Ultimate MSDN subscribers, on the other hand, receive 1,500 hours/month, so you can run either 2 instances of @home or, preferably, a single dual-core instance – or a quad-core for half a month.

A bigger instance is better for @home, and here’s why: Folding@home rewards speed over quantity. The faster a work unit is completed, the more points are awarded. Consequently, you and your team (by default, the Windows Azure team) do better! To get these bonus points, you first need a passkey. A passkey acts as a unique identifier for a user and is required to earn the bonus. In the project properties, you can add the passkey and specify the team number. 184157 is the Windows Azure team, but you can change this if you’re already on another team:

Next, if you have downloaded the bits already, you might need to re-download them.
To know for sure, check whether you have both clients in the client folder of the AtHomeWebRole project, as pictured above. Specifically, you want to see FAH6.34-win32-SMP.exe. If you don’t have both executables as shown in the green box above, re-download the solution from the get-started page.

Within the project properties, you can now configure the app to use whichever VM size is most appropriate for you. The larger VM will run faster and accumulate more points, but will hit the usage cap sooner. If you aren’t on a trial or don’t have the spending cap in place, monitor your usage carefully and be sure you’re staying within your plan limits – or be willing to pay for the usage! That’s all there is to it! Make sure your storage account is configured per the instructions – if you had a deployment already, the new code will start and run automatically, as it picks up its settings from the storage account.

So what kind of results can you expect? My main user – folding primarily on Azure, but also some on my home 4-core i5 box – has the following numbers:

Looking at these stats, I’m pulling in an average of 146 points per WU (and 146 points per CPU). This is actually a tiny bit better than it should be, because my home machine folds at a much higher rate and contributes to this account. I then deployed some 8-core and a few 4-core instances with a different account and a different passkey:

This account is pulling almost 4,000 points per WU! If we assume they were all 8-core boxes (which they weren’t, so these numbers are unfavorable), then dividing by 8 gives 474 points per CPU per WU. The bottom line: CPUs working together pull significantly more points than CPUs working alone.

See? I told you this was geeky stuff. In either event, Folding@home is a fantastic project to contribute to and, hopefully, a fun way to learn Windows Azure in the process.
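If you want to see the geeky arithmetic behind those claims, here’s a quick sanity check of the numbers in this post (the 3,792 figure isn’t stated anywhere – it’s just 474 points per CPU times 8 cores, implied by the division above):

```csharp
using System;

// Trial burn rate: 8 cores running 24x7 against the 750 compute-hour
// monthly trial allowance mentioned earlier in the post.
int hoursBurnedPerDay = 8 * 24;                    // 192 compute-hours/day
double daysUntilCap = 750.0 / hoursBurnedPerDay;   // just under 4 days

// Per-CPU points: teamed (SMP) folding vs. a lone small instance.
int pointsPerCpuTeamed = 3792 / 8;                 // 474, as in the post
int pointsPerCpuSolo = 146;
double advantage = (double)pointsPerCpuTeamed / pointsPerCpuSolo;

Console.WriteLine($"Cap hit in ~{daysUntilCap:F1} days; teamed CPUs earn ~{advantage:F1}x the points");
```

Roughly a 3x points advantage per CPU – which is why a single multi-core instance beats several single-core instances for the same compute hours.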
Finally, if you’re not using Windows Vista, 7, or Server 2008, or otherwise just want to download the package files directly, you can do that by reading the instructions here!

Thursday, April 5th, at noon, we’ll be having our second-to-last @home webcast, this time focusing on debugging and diagnostics in the cloud. While a lot of what we show is in the context of our @home app, much of what we’ll be doing is fairly general in nature, especially some of the diagnostics material we’ll be covering this week. From this week’s abstract:

In this third webcast episode, we talk about debugging your application. We look at debugging locally and how the emulator works for local development, and we talk about configuring diagnostic data to capture logs and performance counters. For the especially tricky troubleshooting issues, we discuss IntelliTrace, an advanced debugging tool, to gather more information about your application – essentially building a timeline of events that can be examined to quickly find the root of a problem. We also look at remote desktop options for troubleshooting.

We’ll talk with you then!

Windows Azure has a great caching service that allows applications (whether or not they are hosted in Azure) to share an in-memory cache as a middle-tier service. If you’ve followed the ol’ Velocity project, then you’re likely aware it was a distributed cache service you could install on Windows Server to build out a middle-tier cache. It was ultimately rolled into Windows Server AppFabric, and is (with a few exceptions) the same technology now offered in Windows Azure.

The problem with a traditional in-memory cache (such as the ASP.NET cache) is that it doesn’t scale: each instance of an application maintains its own version of a cached object. While this has a huge speed advantage, making sure data is not stale across instances is difficult. A while back, I wrote a series of posts on how to do this in Windows Azure, using the internal HTTP endpoints as a means of syncing cache. On the flip side, the problem with building a middle-tier cache is the maintenance and hardware overhead, and it introduces another point of failure in an application. The Windows Azure cache offers the best of both worlds by providing the in-memory cache as a service, without the maintenance overhead or scalability concerns. Out of the box, there are providers for both the cache and session state (the session state provider, though, requires .NET 4.0).

To get started using the Windows Azure cache, we’ll configure a namespace via the Azure portal. This is done the same way as setting up a namespace for Access Control and the Service Bus. Selecting New (upper left) allows you to configure a new namespace – in this case, we’ll do it just for caching. Just like setting up a hosted service, we’ll pick a namespace (in this case, ‘evangelism’) and a location. Obviously, you’d pick a region closest to your application. We also need to select a cache size.
The cache will manage its size by flushing the least-used objects when under memory pressure. To make setting up the application easier, there’s a “View Client Configuration” button that creates cut-and-paste settings for the web.config. In the web application, you’ll need to add references to Microsoft.ApplicationServer.Caching.Client and Microsoft.ApplicationServer.Caching.Core. If you’re using the cache for session state, you’ll also need to reference Microsoft.Web.DistributedCache (requires .NET 4.0); no additional changes (outside of the web.config) need to be made. Using the cache is straightforward:

using (DataCacheFactory dataCacheFactory = new DataCacheFactory())
{
    DataCache dataCache = dataCacheFactory.GetDefaultCache();
    dataCache.Add("somekey", "someobject", TimeSpan.FromMinutes(10));
}
If you look at some of the overloads, you’ll see that some features aren’t supported in Azure:
That’s it! Of course, the big question is: what does it cost? The pricing, at the time of this writing, is:
Standard pay-as-you-go pricing for caching
128 MB cache for $45.00/mo
256 MB cache for $55.00/mo
512 MB cache for $75.00/mo
1 GB cache for $110.00/mo
2 GB cache for $180.00/mo
4 GB cache for $325.00/mo
One additional tip: if you’re using the session state provider locally in the development emulator with multiple instances of the application, be sure to add an applicationName to the session state provider:
<sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
  <providers>
    <add name="AppFabricCacheSessionStoreProvider"
         type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
         cacheName="default"
         useBlobMode="true"
         dataCacheClientName="default"
         applicationName="SessionApp"/>
  </providers>
</sessionState>
The reason: when running locally in IIS, each website generates its own session identifier. Adding the applicationName ensures the session state is shared across all instances.
Happy Caching!

On most (or perhaps even all?) of the production servers I’ve worked on, antivirus/antimalware detection apps are often not installed, for a variety of reasons: performance, the risk of false positives or of certain processes getting shut down unexpectedly, or the simple fact that most production machines are under strict access control and deployment restrictions. Still, it’s a nice option to have, and it’s now possible to set this up easily in Windows Azure roles.

Somewhat quietly, the team released a CTP of Microsoft Endpoint Protection for Windows Azure, a plug-in that makes it straightforward to configure your Azure roles to automatically install and configure the Microsoft Endpoint Protection (MEP) software. The download includes the necessary APIs to make it simple to configure. Upon initial startup of the VM, the MEP software is installed and configured, downloading the binaries from Windows Azure storage in a datacenter of your choosing. Note: *you* don’t have to store anything in Windows Azure storage; rather, the binaries are kept at each datacenter so the download is fast and bandwidth-free, provided you pick the datacenter your app resides in.

So, to get started, I’ve downloaded and installed the MSI package from the site. Next, I’ve added the antimalware module to the ServiceDefinition file like so:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MEP" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1" vmsize="ExtraSmall">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      <Import moduleName="Antimalware" />
      <Import moduleName="Diagnostics" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WebRole>
</ServiceDefinition>
Specifically, I added Antimalware to the <Imports> section. The other modules are for diagnostics (not strictly necessary, but useful, as you’ll see in a bit) and remote access, so we can log into the server via RDP.
Next, the ServiceConfiguration will configure a bunch of options. Each setting is spelled out in the document on the download page:
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MEP" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="WebRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="xxx" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ServiceLocation" value="North Central US" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableAntimalware" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableRealtimeProtection" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableWeeklyScheduledScans" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.DayForWeeklyScheduledScans" value="1" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.TimeForWeeklyScheduledScans" value="120" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedExtensions" value="txt|log" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedPaths" value="e:\approot\custom" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedProcesses" value="d:\program files\app.exe" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="xxx" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="xxx" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2013-03-21T23:59:59.000-04:00" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="xxx" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>
Many of these settings are self-explanatory, but essentially, we’re setting up weekly scans at 2am on Sunday, excluding app.exe and everything in e:\approot\custom, and skipping txt and log files. Also, the MEP bits will be pulled from the North Central US datacenter. It’s not a big deal if your app is outside of North Central – it just means the install takes a few moments longer (the default is South Central). (And, technically, since bandwidth going into the datacenter is currently free, the bandwidth isn’t an issue.)
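As a quick check on that reading of the schedule, the two scan settings decode like so (this assumes the plug-in counts days from 1 = Sunday and treats the time value as minutes past midnight, which matches the interpretation above):

```csharp
using System;

// Decoding the scheduled-scan settings from the configuration above.
// Assumption: DayForWeeklyScheduledScans counts from 1 = Sunday, and
// TimeForWeeklyScheduledScans is minutes past midnight.
int daySetting = 1;
int timeSetting = 120;
DayOfWeek scanDay = (DayOfWeek)(daySetting - 1);        // Sunday
TimeSpan scanTime = TimeSpan.FromMinutes(timeSetting);  // 02:00
Console.WriteLine($"Weekly scan: {scanDay} at {scanTime}");
```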
If we log into the box (the role must be RDP enabled to do this) we’ll see these settings reflected in MEP.
Weekly scans:
Excluding app.exe:
And skipping txt and log files:
Finally, we can also set up the Windows Azure Diagnostics agent to transfer relevant event log entries to storage – in this example, we’re just adding the antimalware entries explicitly, though getting verbose information like this is probably not desirable:
private void ConfigureDiagnosticMonitor()
{
    DiagnosticMonitorConfiguration diagnosticMonitorConfiguration =
        DiagnosticMonitor.GetDefaultInitialConfiguration();

    diagnosticMonitorConfiguration.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1d);
    diagnosticMonitorConfiguration.Directories.BufferQuotaInMB = 100;

    diagnosticMonitorConfiguration.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1d);
    diagnosticMonitorConfiguration.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

    diagnosticMonitorConfiguration.WindowsEventLog.DataSources.Add("Application!*");
    diagnosticMonitorConfiguration.WindowsEventLog.DataSources.Add("System!*");
    diagnosticMonitorConfiguration.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(1d);

    // Antimalware settings:
    diagnosticMonitorConfiguration.WindowsEventLog.DataSources.Add(
        "System!*[System[Provider[@Name='Microsoft Antimalware']]]");
    diagnosticMonitorConfiguration.WindowsEventLog.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(1d);

    PerformanceCounterConfiguration performanceCounterConfiguration = new PerformanceCounterConfiguration();
    performanceCounterConfiguration.CounterSpecifier = @"\Processor(_Total)\% Processor Time";
    performanceCounterConfiguration.SampleRate = System.TimeSpan.FromSeconds(10d);
    diagnosticMonitorConfiguration.PerformanceCounters.DataSources.Add(performanceCounterConfiguration);
    diagnosticMonitorConfiguration.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1d);

    DiagnosticMonitor.Start(wadConnectionString, diagnosticMonitorConfiguration);
}
To filter the event logs from MEP, we can add some filtering like so (adding Levels 1, 2, and 3 to the filter so we skip the verbose Level 4 entries):
diagnosticMonitorConfiguration.WindowsEventLog.DataSources.Add(
    "System!*[System[Provider[@Name='Microsoft Antimalware'] and (Level=1 or Level=2 or Level=3)]]");
After deploying the role and waiting a few minutes, the entries are written into Azure table storage, in the WADWindowsEventLogsTable. In this case, I’m looking at them using Cloud Storage Studio (although, for diagnostics and performance counters, their Azure Diagnostics Manager product is fantastic for this kind of thing):
While not everyone needs or desires this functionality, it’s a great option to have (particularly if the system is part of a file intake or distribution system).

Tomorrow (Thursday, 3/15/2012) at noon ET / 9am PT, we have the first screencast in the @home series: an introduction to the @home distributed computing project! This is the first in a series where we’ll dive into various aspects of Windows Azure – in this first webcast, we’ll keep it 100 level, discussing the platform, how to get started, and what the project is about. From the abstract page:

In this 100-level webcast, we introduce Windows Azure. We look at signing up for a new account, evaluate the offers, and give you a tour of the platform and what it's all about. Throughout this workshop, we use a real-world application that uses Windows Azure compute cycles to contribute back to Stanford's Folding@home distributed computing project. We walk through the application, how it works in a Windows Azure virtual machine and makes use of Windows Azure storage, and deploying and monitoring the solution in the cloud.

If you can’t make this one, be sure to check out the rest of the series by watching the @home website – we’ll be diving deeper into various features as the weeks progress, and we’ll post links to the recordings as they become available.

Two years ago, Jim O’Neil and I developed a quick Azure training program called “@home with Windows Azure” – a way to learn Windows Azure and have some fun contributing to a well-known distributed computing effort, Folding@home. A few months later, Peter Laudati joined the cloud team and we developed the game RockPaperAzure. RockPaperAzure was a lot of fun and is still active, but we decided to re-launch the @home with Windows Azure project because of all the changes in the cloud since that 2010 effort.

So, having said all that, welcome to our “Learn the Cloud. Make a Difference” distributed computing project! It’s been updated, as you can see on the page – a much cleaner and nicer layout, maintaining our great stats from the 2010 effort, where a cumulative 6,200+ virtual machines completed 188k work units! (Of course, as happy as I am with those numbers, the Folding@home project has over 400k active CPUs with over 8 petaFLOPS of processing power!)

Stanford University’s Pande Lab has been sponsoring Folding@home for nearly 12 years, during which they’ve used the results of their protein folding simulations (running on thousands of machines worldwide) to provide insight into the causes of diseases such as Alzheimer’s, Mad Cow disease, ALS, and some cancer-related syndromes. When you participate in @home with Windows Azure, you’ll leverage a free 3-month Windows Azure trial (or your MSDN benefits) to deploy Stanford’s Folding@home application to Windows Azure, where it will execute protein folding simulations in the cloud, contributing to the research effort. Additionally, Microsoft is donating $10 (up to a maximum of $5,000) to Stanford’s Pande Lab for everyone who participates.

We’ve provided a lot of information to get you started, including four short screencasts that will lead you through the process of getting an Azure account, downloading the @home with Windows Azure software, and deploying it to the cloud. And we won’t stop there!
We also have a series of webcasts planned that go into more detail about the application and other aspects of Windows Azure we leveraged to make this effort possible. Here is the webcast schedule – and of course, you can jump in at any time:

3/15/2012, 12pm EDT – @home with Azure Overview
3/22/2012, 12pm EDT – Windows Azure Roles
3/29/2012, 12pm EDT – Azure Storage Options
4/05/2012, 12pm EDT – Debugging in the Cloud
4/12/2012, 12pm EDT – Async Cloud Patterns

SQL Azure just got better pricing! Here are the details (price per database per month):

0 to 100 MB – flat $4.995
Greater than 100 MB to 1 GB – flat $9.99
Greater than 1 GB to 10 GB – $9.99 for first GB, $3.996 for each additional GB
Greater than 10 GB to 50 GB – $45.954 for first 10 GB, $1.998 for each additional GB
Greater than 50 GB to 150 GB – $125.874 for first 50 GB, $0.999 for each additional GB

Notice the new 0 to 100 MB tier – finally, a good option for small databases, utility databases, blogs, etc. Note, however, that when setting up a database, there is a maxsize property – currently, maxsize can be set to 1 GB, 5 GB, 10 GB, and then in 10 GB increments up to 150 GB. (The 1 GB and 5 GB sizes belong to the Web Edition; the larger sizes are part of the Business Edition. Both offer the same availability and scalability.) So, if a database has a maxsize of 1 GB, as long as its size stays at or below 100 MB, the reduced pricing is in effect. The price is calculated daily based on the peak size of the database for that day, and amortized over the month.

Here is a breakdown of the changes from the previous pricing model (GB / previous pricing / new pricing / new price per GB / total % decrease):

5 GB – $49.95 – $25.99 – $5.20 – 48%
10 GB – $99.99 – $45.99 – $4.60 – 54%
25 GB – $299.97 – $75.99 – $3.04 – 75%
50 GB – $499.95* – $125.99 – $2.52 – 75%
100 GB – $499.95* – $175.99 – $1.76 – 65%
150 GB – $499.95* – $225.99 – $1.51 – 55%

*Previous prices for 50 GB and larger reflect the $499.95 price cap announced December 12, 2011.

For more information, check out the Accounts and Billing in SQL Azure page. Also, my colleague Peter Laudati has a nice write-up on the changes!
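If it helps to see the tier math in code, here’s a small sketch of the per-GB calculation using the rates in the first table. (The helper name is mine, not from any SQL Azure API, and the summary table above rounds totals to .99, so its figures differ slightly from the raw rate math.)

```csharp
using System;

// Tiered monthly price per the rate table above (databases over 100 MB,
// whole-GB sizes for simplicity).
static decimal MonthlyPrice(int sizeGb)
{
    if (sizeGb <= 1) return 9.99m;                             // >100 MB to 1 GB: flat
    if (sizeGb <= 10) return 9.99m + (sizeGb - 1) * 3.996m;    // +$3.996/GB to 10 GB
    if (sizeGb <= 50) return 45.954m + (sizeGb - 10) * 1.998m; // +$1.998/GB to 50 GB
    return 125.874m + (sizeGb - 50) * 0.999m;                  // +$0.999/GB to 150 GB
}

Console.WriteLine(MonthlyPrice(10));  // 45.954, matching "$45.954 for first 10 GB"
Console.WriteLine(MonthlyPrice(50));  // 125.874, matching "$125.874 for first 50 GB"
```

Note how the marginal price per GB drops at each tier boundary – the bigger the database, the cheaper each additional GB.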

Windows Azure has been capable of running multiple websites in a single web role for some time now, but I recently found myself with 2 separate Azure solutions and was looking to combine them into a single deployment. Just like in IIS, this is most often done via host headers, so incoming requests can be forwarded to the correct site.

The Goal

The fine folks at Infragistics created a really cool Silverlight-based reporting dashboard for Worldmaps. Until now, each was running as its own Azure hosted service. Options to consolidate included folding the code into the Worldmaps site, which would involve actual work, or converting the deployment to use IIS instead of the hostable web core (HWC), which was originally the only way to host Azure deployments prior to version 1.3 of the SDK. Under IIS, host headers can be used to direct traffic to the correct site.

Preconsiderations

Inside the ServiceDefinition file, the <Sites> section is used to define the websites and virtual directories, like so:

<Sites>
  <Site name="Web" physicalDirectory="..\WorldmapsSite">
    <Bindings>
      <Binding name="HttpIn" endpointName="HttpIn" />
    </Bindings>
  </Site>
  <Site name="Reporting" physicalDirectory="..\..\igWorldmaps\WorldmapsDemo.Web">
    <Bindings>
      <Binding name="HttpIn" endpointName="HttpIn" hostHeader="reporting.myworldmaps.net" />
    </Bindings>
  </Site>
</Sites>
Nothing too crazy in there, but I’ll talk about the paths later.
The first problem is that I was using webrole.cs file in the Worldmaps application, overriding the Run method to do some background work:
public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        // I'm doing stuff!
    }
}
The Run method is called on a different thread, and it did a lot of background processing for the site (logging data, drawing maps, etc.). This is a great technique, by the way, for adding a “worker” to your website. By itself, this works under either IIS or HWC – except that under HWC, the thread runs in the same process as the site. I could write to an in-memory queue from the website and process that queue in webrole.cs without problem, provided the usual thread-safety rules are obeyed. Likewise, the worker could read/write an in-memory cache used by the website. Under IIS, though, the site and the role entry point run in separate processes, so it wasn’t possible to do this without re-engineering things a bit. You don’t need to worry about this if you aren’t doing anything “shared” in your webrole.cs file.
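The in-process producer/consumer pattern described above can be sketched with a thread-safe queue – the names here are illustrative, not from the actual Worldmaps code:

```csharp
using System.Collections.Concurrent;

// Sketch of the shared in-memory queue pattern the post describes:
// web requests enqueue work; the WebRole.Run loop dequeues and processes it.
// Only valid under HWC, where the site and Run() share one process.
var pendingLogEntries = new ConcurrentQueue<string>();

// Produced by a web request:
pendingLogEntries.Enqueue("hit from 127.0.0.1");

// Consumed by the Run() background loop:
string entry;
if (pendingLogEntries.TryDequeue(out entry))
{
    // ...log the entry, update maps, etc.
}
```

ConcurrentQueue<T> handles the thread-safety rules mentioned above; under IIS (separate processes), this object would simply never be visible to both sides, which is why the re-engineering was needed.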
Add the Project
In my existing Worldmaps solution, I added the Infragistics “WorldmapsReporting” project by adding the project to the solution (right-click the solution, and choose Add Existing Project):
Hook it Up
The <Sites> tag (seen above) is pretty self-explanatory, as it defines each site in the deployment. For the first and main site, I didn’t provide a host header because I want it to respond to pretty much anything (www, etc.). For the second site, I give it the reporting.myworldmaps.net host header.
Here’s the tricky part, which in retrospect seems so straightforward. The physicalDirectory path is the path to the web project, relative to the cloud project’s directory. When I first created the Worldmaps solution (WorldmapsCloudApp4 is when I converted it to .NET 4), I had the cloud project, the website itself, and a library project in the same directory, like so, with the cloud project highlighted:
So, the path to WorldmapsSite is up one level. To get to the Infragistics website, it’s up two levels, then into the igWorldmaps folder and the WorldmapsDemo.Web folder. We can ignore the other folders.
DNS
The project in Windows Azure is hosted at myworldmaps.cloudapp.net, as seen from the Azure dashboard:
…but I own the myworldmaps.net domain. In my DNS, I add the CNAMEs for both www and reporting, both pointing to the Azure myworldmaps.cloudapp.net URL (picture from my DNS dashboard, which will vary depending on who your DNS provider is):
Testing Locally
To test host headers on a local machine, you’d need to add the DNS names to your hosts file (C:\Windows\System32\drivers\etc\hosts), like so:
127.0.0.1 myworldmaps.net
127.0.0.1 www.myworldmaps.net
127.0.0.1 reporting.myworldmaps.net
Overall, a fairly straightforward and easy way to add existing websites to a single deployment. It can save money and/or increase reliability by running multiple instances of the deployment.
Links
http://www.myworldmaps.net
http://reporting.myworldmaps.net

Week #3 of the Rock Paper Azure Challenge ended at 6pm EST on 12/9/2011 – which means another five contestants just won $50 Best Buy gift cards! Congratulations to the following players for having the top 5 bots for Week #3: AmpaT, choi, Protist, RockMeister, and porterhouse.

Just a reminder to folks in the contest: be sure to catch Scott Guthrie, Dave Campbell, and Mark Russinovich live online next Tuesday, 12/13/2011, for the Learn Windows Azure event!

Does your bot have what it takes to win? There is one more week to find out, now through December 16th, 2011. Visit the Rock Paper Azure Challenge site to learn more about the contest and get started. Remember, there are two ways to win:

Sweepstakes: To enter the sweepstakes, all you have to do is enter a bot – any bot, even one of the pre-coded ones we provide – into the game between now and 6 p.m. ET on Dec. 16th. No ninja coding skills needed – heck, you don’t even need Visual Studio or a Windows machine to participate! At 6 p.m. ET on Friday, December 16, 2011, the “Fall Sweepstakes” round will be closed and no new entries will be accepted. Shortly thereafter, four bots will be drawn at random for the Grand Prize (trip to Cancun, Mexico), First Prize (Acer Aspire S3 laptop), Second Prize (Windows Phone), and Third Prize (Xbox w/Kinect bundle).

Competition: For the type-A folks, we’re keen on making this a competitive effort as well, so each week – beginning Nov. 25th and ending Dec. 16th – the top FIVE bots on the leaderboard will win a $50 Best Buy gift card. If your bot is good enough to be in the top five on successive weeks, you’ll take home a gift card each of those weeks, too. And of course, since you’ve entered a bot, you’re automatically in the sweepstakes as well!

Note: as with past iterations of the challenge, even though you can iterate and upload updated bots for the competition, you will only be entered into the sweepstakes once. You know what they say… you gotta be in it to win it!
Good luck to all players in week #4!

Jim, Peter, and I are gearing up for another road trip to spread the goodness that is Windows Azure! The Windows Azure DevCamp series launched recently with a two-day event in Silicon Valley, and we’re jumping on the bandwagon for the East Region. We have five stops planned in December, and we’re doing things a bit differently this go-round. Most of the events will begin at 2 p.m. and end at 9 p.m. – with dinner in between, of course. The first part will be a traditional presentation format, and then we’re bringing back RockPaperAzure for some hands-on time during the second half of the event. We’re hoping you can join us the whole time, but if classes or your work schedule get in the way, definitely stop by for the evening hackathon (or vice versa). By the way, it wouldn’t be RockPaperAzure without some loot to give away, so stay “Kinected” to our blogs for further details on what’s at stake!

Here’s the event schedule – be sure to register quickly, as some venues are very constrained on space. You’ll want your very own account to participate, so no time like the present to sign up for the Trial Offer, which will give you plenty of FREE usage of Windows Azure services for the event as well as beyond.

NCSU, Raleigh NC – Mon., Dec. 5th, 2011 – 2 – 9 p.m.
Microsoft, Farmington CT – Wed., Dec. 7th, 2011 – 2 – 9 p.m.
Microsoft, New York City – Thurs., Dec. 8th, 2011 – 9 a.m. – 5 p.m.
Microsoft, Malvern PA – Mon., Dec. 12th, 2011 – 2 – 9 p.m.
Microsoft, Chevy Chase MD – Wed., Dec. 14th, 2011 – 2 – 9 p.m.