I wanted to document this error here because I recently went nearly blind trying to solve it, and I never did find the exact cause when Bing'ing for it. Unfortunately it's a pretty generic error, so the answer may very well have been out there, but it seems there are several causes for it. In my case I created a new instance of a class that I was adding to my Azure table, but the insert kept failing with this "One of the request inputs is out of range" error, which was driving me absolutely nuts. Fortunately some bright individual pointed out to me that my class contained a DateTime property that I wasn't initializing. Apparently the default DateTime.MinValue in .NET is outside the bounds of what Azure table storage supports. So I just put a dummy date in the property and - voila! - everything started working again.
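A minimal sketch of the fix (class and property names here are hypothetical): Azure table storage only accepts DateTime values from January 1, 1601 (UTC) onward, while an uninitialized .NET DateTime defaults to DateTime.MinValue (year 1), which is rejected.

```csharp
using System;
using Microsoft.WindowsAzure.StorageClient;

public class CustomerEntity : TableServiceEntity
{
    // Left uninitialized, this defaults to DateTime.MinValue (year 1),
    // which is below the minimum Azure table storage accepts
    // (January 1, 1601 UTC) and triggers the "out of range" error.
    public DateTime LastContacted { get; set; }

    public CustomerEntity()
    {
        // Initialize to a supported dummy value (or DateTime.UtcNow).
        LastContacted = new DateTime(1601, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    }
}
```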

So just a heads up in case you see this error yourself - it seems like a pretty easy one to miss.

••• Adrian Bridgewater asserted “Mentor program champions collaboration/community development over specific tools/development processes” in the introduction to his Outercurve Decrees More Agnostic Blessings post of 12/23/2011 in his Open Source column for Dr. Dobb's:

The Outercurve Foundation has announced the acceptance of the OData Library for Objective C into its endorsed rank of open-source-centric technology. The OData Library for Objective C supports cross-platform interoperability of tools essential for mobile developers and is similar to libraries available for .NET, Java, and other languages.

The platform-, technology-, and license-agnostic spokespeople at Outercurve confirm that developers can use the library to connect to OData feeds by leveraging code that makes it easier to build queries and parse AtomPub and JSON responses from the server. Accepting the project into Outercurve will, says the foundation, support interoperability and make it simpler for iOS developers to adopt and contribute to the OData Library.

In line with its latest agnostically filtered judgments, the foundation is also expected to announce details of its new Mentor Program, which appears to have been designed to deliver more project-specific relevance. While Outercurve has already had active mentorship activities in motion for project and gallery managers, the new program is still only described in generic terms as providing more support to create successful projects, develop and transmit an Outercurve culture, and develop an environment of leadership development.

According to the organization's publicity arm, "The mentor program will introduce gallery and project managers to the 'Outercurve Way', which emphasizes collaboration/community development over specific tools/development process; training and mentoring over incubation, and development guidance/best practices over fixed development methodologies."

Update 12/21/2011: Added an embedded Excel Web Services worksheet from my SkyDrive account (see end of post). The worksheet was created from ContentItems.csv with a graph having a linear (rather than logarithmic) abscissa by the process described in Microsoft’s Excel Mashups site.

In this post I will show you how to consume a JSON WCF REST Service in Windows Phone 7. This post is divided into four parts.

Creating simple WCF REST Service for ADD

Consuming JSON REST Service in Windows Phone 7 application.

Creating complex WCF REST Service with custom classes and types.

Consuming complex JSON REST Service in Windows Phone 7 application.

Creating simple Add Service

To start, let us create a WCF REST Service returning JSON as below. There is one operation contract named Add in the service, which takes two strings as input parameters and returns an integer.
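Such a contract might be sketched as follows (the interface name, URI template, and parameter names are assumptions, since the original post shows the code as an image):

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IAddService
{
    // e.g. GET .../AddService.svc/Add?num1=5&num2=7 returns JSON
    [OperationContract]
    [WebGet(UriTemplate = "Add?num1={number1}&num2={number2}",
            ResponseFormat = WebMessageFormat.Json)]
    int Add(string number1, string number2);
}

public class AddService : IAddService
{
    public int Add(string number1, string number2)
    {
        return int.Parse(number1) + int.Parse(number2);
    }
}
```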

After configuring, the service is ready for hosting. You are free to host it on Azure, IIS, or Cassini. For local ASP.NET Development Server hosting, press F5. Test the service in the browser, and if you are getting the expected output, you are good to consume it in the Windows Phone 7 application.

Consuming Service in Windows Phone 7

To consume a REST Service in Windows Phone 7 and then parse the JSON, you need to add the references below to the Windows Phone 7 project.

For the page design, I have put two textboxes and one button. The user will input the numbers to be added in the textboxes, and on the button's click event the result will be displayed in a message box. Essentially, on the click event of the button we will make a call to the service. The design of the page is as below,

On the click event of the button we need to make a call to the service, as below,

And the ServiceUri is constructed as below,
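The click handler and URI construction might be sketched like this (the control names, host, port, and service path are hypothetical; replace them with your own):

```csharp
// Hypothetical service address; replace host, port, and path with your own.
private const string ServiceUriFormat =
    "http://localhost:1234/AddService.svc/Add?num1={0}&num2={1}";

private void btnAdd_Click(object sender, RoutedEventArgs e)
{
    Uri serviceUri = new Uri(
        string.Format(ServiceUriFormat, txtFirst.Text, txtSecond.Text));

    // WebClient downloads the JSON response as a plain string.
    WebClient client = new WebClient();
    client.DownloadStringCompleted += client_DownloadStringCompleted;
    client.DownloadStringAsync(serviceUri);
}
```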

There is nothing particularly fancy about the above service call; we are just downloading the JSON as a string using WebClient. The parsing of the JSON, however, is the main focus of this post. To parse it, we need to write the below code in the completed event.

In the above code, we are:

Converting the downloaded string to a Stream

Creating an instance of DataContractJsonSerializer, passing the type that the service returns so the serializer knows what to deserialize

Reading the stream through the DataContractJsonSerializer instance

Displaying the result.

Putting all the code together, you should have the below code to make a call and parse JSON in Windows Phone 7.
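A sketch of the completed event handler (the serializer type below is an assumption; it must match what your service actually returns, e.g. typeof(string) if the result is serialized as a string):

```csharp
using System;
using System.IO;
using System.Net;
using System.Runtime.Serialization.Json;
using System.Text;
using System.Windows;

void client_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    if (e.Error != null)
    {
        MessageBox.Show(e.Error.Message);
        return;
    }

    // Convert the downloaded JSON string into a stream.
    using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(e.Result)))
    {
        // Pass the type the service returns; the Add operation
        // in this post returns an integer.
        var serializer = new DataContractJsonSerializer(typeof(int));
        int result = (int)serializer.ReadObject(stream);

        // Display the result.
        MessageBox.Show(result.ToString());
    }
}
```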

The Windows Azure Toolkit for Windows Phone helps a lot in integrating all kinds of Azure features into the device. Recently, the Azure Toolkit for Windows Phone was split into separate parts (storage, ACS, push notification, …), so that each can be used separately in a project (via NuGet), which is a VERY good thing.

In particular, it makes it easy to deal with ACS authentication and external identity providers, which is what we need in our WP7 application. We want an authenticated user in our app, since the OData service will associate data with this user. The service will also ensure that each user has access to his own data only.

Back to the Azure Toolkit: we are interested in the ACS part, so that our WP7 application provides a log in page and an authenticated user. The NuGet package for this is here:

The How To is very explicit, and it makes it very easy to handle OAuth 2 authentication in a Windows Phone app.

But the procedure doesn’t explain how to set up your remote service so that it can handle the token you just received, nor does it detail how to send the token to the server. Actually the answer is a mix of the tutorial dedicated to the initial toolkit and the new “How To” procedure.

Here is how to do it in a few steps…

Where we will go

Our WP7 application needs to access data from the OData service, but only an authenticated user will be allowed to. We also want to filter resulting information according to the user identity.

In your Windows Phone application

Add the NuGet package to your WP7 project

This package will install everything your project needs to use ACS and store the token that will later be reinjected into the OData HTTP request.

The “HowTo” ([Your WP7 Project]/App_Readme/Phone.Identity.Controls.BasePage.Readme.htm) is very explicit, and you just need to follow the steps to make it work. By the way, you need to have set up an ACS namespace as explained here in Task 2.

Add the Web Browser Capability in the application manifest

The log in page will be provided through a web browser control, so you must make it available in the manifest file WMAppManifest.xml:

<Capability Name="ID_CAP_WEBBROWSERCOMPONENT"/>

Handling Navigation

Add the log in page as the welcome page in your application, as explained in the How To.

If your token is valid when you start the app, you will skip the log in page and navigate directly to your application home page. So you will have to handle the Back button from there so that it exits the application. You can do that by clearing the navigation history when you navigate to your home page:
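A sketch of that history-clearing step, placed in the home page's code-behind (assuming Windows Phone OS 7.1, where NavigationService.RemoveBackEntry is available):

```csharp
using System.Windows.Navigation;

public partial class HomePage
{
    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        base.OnNavigatedTo(e);

        // Clear the navigation history so pressing Back from the
        // home page exits the application instead of returning
        // to the log in page.
        while (NavigationService.CanGoBack)
        {
            NavigationService.RemoveBackEntry();
        }
    }
}
```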

Access to the Resources is not allowed from a thread other than the UI thread, so if that may happen in your case, you should save the token somewhere from a BeginInvoke so that you can reuse it safely in the SendingRequest event.
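One way to sketch this reinjection (the token field name and the "OAuth" authorization scheme are assumptions; match them to whatever the server-side OAuth 2 module expects):

```csharp
using System;
using System.Data.Services.Client;

public static class TokenStore
{
    // Saved from the UI thread (e.g. inside a BeginInvoke) after log in.
    public static string SimpleWebToken;
}

public static class ContextExtensions
{
    public static void AttachAcsToken(this DataServiceContext context)
    {
        context.SendingRequest += (sender, e) =>
        {
            // Reinject the saved ACS token on every OData request.
            e.RequestHeaders["Authorization"] = "OAuth " + TokenStore.SimpleWebToken;
        };
    }
}
```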

In your WCF Data Service

How do I get the token back in my WCF Data Service?

If your WCF Data Service has not been setup to handle OAuth 2, you should follow the step Task 3 – Securing an OData Service with OAuth2 and Windows Identity Foundation of the great tutorial mentioned earlier.

You will use an assembly developed by the Microsoft DPE guys, which extends the WIF mechanism to handle OAuth.

At this point you should have:

Added a reference to DPE.OAuth2.dll (you should download the lab to get this one)

Added a reference to Microsoft.Identity.Model.dll

Added entries to the web.config

Checking the identity

You will probably want to check the identity for some actions made on your data.

You can use interceptors (QueryInterceptor or ChangeInterceptor) to do some identity validation and relative tasks, according to your business rules. In my case, I store the user identity name in each new record.
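A sketch of what such interceptors might look like in the data service (the entity set, entity type, and property names here are hypothetical):

```csharp
using System;
using System.Data.Services;
using System.Linq.Expressions;
using System.Web;

public partial class RecordsDataService
{
    // Stamp each newly added record with the authenticated user's name.
    [ChangeInterceptor("Records")]
    public void OnChangeRecords(Record item, UpdateOperations operations)
    {
        if (operations == UpdateOperations.Add)
        {
            item.OwnerName = HttpContext.Current.User.Identity.Name;
        }
    }

    // Filter queries so each user sees only his own records.
    [QueryInterceptor("Records")]
    public Expression<Func<Record, bool>> OnQueryRecords()
    {
        string user = HttpContext.Current.User.Identity.Name;
        return r => r.OwnerName == user;
    }
}
```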

Let’s give it a try

You can put a breakpoint on the ChangeInterceptor and QueryInterceptor to check that everything is going fine and that your identity is retrieved properly on the service side. If not, try using IIS instead of the Visual Studio Development Server (thanks to Benjamin Guinebertière for that tip!)

What do I finally have?

Each OData request made from your WP7 application now includes an identity token, granted by the identity provider through ACS. You can use this identity for business-rule purposes in your service, and this works wherever the request comes from. Your data is thus protected from being accessed or updated from a simple web browser, which will receive a security exception.

You can still use these data from any application running on any platform that is able to send an OAuth http authorization header.

This sample illustrates how to implement federated authentication using ACS and an Active Directory Federation Services (AD FS) 2.0 identity provider with a WCF relying party web service. The sample includes a WCF service and a WCF client as command line applications. The WCF service requires that clients authenticate using a SAML token from ACS, which is obtained via another SAML token acquired from an AD FS 2.0 identity provider. The web service client requests a SAML token from AD FS 2.0 using Windows Authentication, and then exchanges this token for the ACS token required to access the WCF service.

To use ACS, you must first obtain a Windows Azure subscription by browsing to the Windows Azure AppFabric Management Portal and signing up. Once you have a subscription, on the Windows Azure AppFabric Management Portal, browse to the Service Bus, Access Control, and Caching section and create an Access Control Service namespace.

Platform Compatibility

ACS is compatible with virtually any modern web platform, including .NET, PHP, Python, Java, and Ruby. ACS can be accessed from applications that run on almost any operating system or platform that can perform HTTPS operations.

Configuring the Sample

The ACS configuration required for this sample can be performed using either the ACS management portal or the ACS management service. Select one of the two options below to go to the relevant section.

Option 1: Configuring via the ACS Management Portal

Option 2: Configuring via the ACS Management Service

Note that since I am using AD FS 2.0 as the federation server, AD FS 2.0 must be installed and running.

Step 1: Open a browser, navigate to http://windows.azure.com, and sign in. From there, navigate to the Service Bus, Access Control, and Caching section to configure your ACS service namespace. Once you have created a namespace, select it and click Manage > Access Control Service at the top of the page. This should launch the following page in a new window.

Step 2: Next, add your AD FS 2.0 identity provider. To do this, you will need to have your WS-Federation metadata document, which is hosted in your AD FS 2.0 server at /FederationMetadata/2007-06/FederationMetadata.xml. For example, if your AD FS 2.0 server is installed on a computer with the name abc.com, then the metadata URL will be:

https://abc.com/FederationMetadata/2007-06/FederationMetadata.xml

If the computer running AD FS 2.0 is accessible from the internet and not placed behind a firewall, then you can use this URL directly. Otherwise, you will need to save this document to your computer and upload it to ACS when adding your identity provider.

Step 3: Click Identity Provider in the left panel and then click Add.

Step 4: Select WS-Federation identity provider and click Next. Depending on the metadata document’s location, complete the form by either entering the URL or uploading the saved file.

Step 5: Next, register your application with ACS by creating a relying party application. Click the Relying Party Applications link on the main page, then click Add and enter the following information in the subsequent form.

In the Name field, enter a name of your choice, for example “Federation Sample”

In the Identity Providers field, check only the AD FS 2.0 identity provider added in the previous step

For Token signing, select “Use a dedicated certificate”. For the certificate file, browse for the ACS2SigningCertificate.pfx file in the Certificates folder of this sample. Enter a password of “password”.

For the Token encryption certificate, browse for the WcfServiceCertificate.cer file in the Certificates folder of this sample and save the settings.

Step 6: When complete, click the Save button and then navigate back to the main page.

Step 7: With your relying party registered, it is now time to create the rules that determine the claims that ACS will issue to your application. To do this, navigate to the main portal page and select Rule Groups. From there, select the Default Rule Group for Federation Sample RP. Click Generate and then select AD FS 2.0 in the subsequent form. Complete the form by clicking on the Generate button. This will create passthrough rules for AD FS 2.0 based on the claim types present in the WS-Federation metadata.

Step 8: Now it is time to add the decryption certificate. This certificate is needed for ACS to decrypt incoming tokens from the AD FS 2.0 identity provider. To do this, click Certificates and keys in the left panel, and click the Add link for Token Decryption.

Step 9: In the Certificate field of the subsequent form, browse to the Certificates folder of this sample and pick ACS2DecryptionCert.pfx. The password for this certificate is “password”.

I wanted to expand on a great blog post that Wayne Berry did on the subject of combining web and worker roles into a single machine instance. As Wayne mentioned, this is a great way to help save money when you do not need a high end machine for your web application and worker processes and it is more efficient to combine these into a single machine instance. By default, you get allocated one instance for the worker role and another machine for your web site. If you want an SLA, you would need 2 of each. Certainly for a new business looking to keep the costs low it would be far better if you could combine these roles into a single machine.

The interesting thing is that a Windows Azure web role is actually just a worker role with some IIS capabilities. The trick to combining them is to add a class to the web role that derives from RoleEntryPoint, which is what a worker role uses as its entry point. For MVC2 this is really easy, and Wayne’s blog explains it really well. Unfortunately, for my MVC4 web role there was no WebRole.cs file. No problem; this can easily be fixed by adding a new class. To do this, right click on the MVC4 project and choose Add | Class.

I called mine WebWorker.cs

After that you simply add code that would look similar to the following and you are done.
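A minimal sketch of such a class (the background loop body is a placeholder for whatever worker processing you need):

```csharp
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

// WebWorker.cs - worker-style entry point hosted inside the web role.
public class WebWorker : RoleEntryPoint
{
    public override void Run()
    {
        // Background processing loop, running alongside IIS
        // on the same instance.
        while (true)
        {
            // ... poll queues, run scheduled jobs, etc. ...
            Thread.Sleep(10000);
        }
    }
}
```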

The purpose of the Windows Azure ISV blog series is to highlight some of the accomplishments from the ISVs we’ve worked with during their Windows Azure application development and deployment. Today’s post is about Windows Azure ISV BrainCredits and how they’re using Windows Azure to deliver their online service.

BrainCredits provides a system to help people track all their informal learning on a virtual transcript, including instructor-led or self-study learning, such as webinars, classes, tutorials, books, conferences, blogs or videos. The system is designed as a highly available, high-volume web-based Model-View-Controller (MVC) application and was built following an agile process, of pushing small, incremental releases into production. To do this, the team needed an architecture that would support fast read operations and allow for very targeted updates without having to recompile or retest the entire application. They decided on a CQRS (Command Query Responsibility Segregation) style architecture. They also decided to host the application on Windows Azure to take advantage of fine-grain scaling of individual subsystems (web roles or worker roles) independently depending on traffic and background workload.

CQRS architectures essentially separate write actions from read actions. With BrainCredits, you’d have write actions, such as registering for an instructor-led class, and read actions, such as seeing your online resume. BrainCredits handles write actions by having the web role collect requests (aka commands) and routes them to the worker role asynchronously via queues. This allows the UI response time to be very fast and also reduces the load on the web role. In this case, BrainCredits was able to deploy Small instances for their Web Roles, with each instance consuming a single core.

The basic architecture is below:

To achieve asynchronous communication between web and worker roles, the following Windows Azure objects were used:

Windows Azure queues. The web role instances drop messages in a queue, alerting the worker role instances that a command needs to be handled.

Windows Azure blobs. Blobs store serialized commands, and each command queue message points to a specific blob. Note: a blob is used to store the command because a BrainCredits user can add free-form text to some of the commands being issued, resulting in unpredictable message sizes that occasionally exceed 8k, the then-current queue message size limit. With the new 64K message-size limit announced in August 2011, this indirection is likely unneeded.

Windows Azure table storage. Event sourcing data is captured in Windows Azure Tables. As events are raised by the domain, the events are stored in table storage and used by the CQRS framework to re-initialize a domain object during subsequent requests. Windows Azure Table Storage is also used for storing command data such as user UI clicks (which are unrelated to domain events). The Domain Event table allows BrainCredits system administrators to recreate all of the UI steps that a user took during their visits to the site (e.g. search performed, page loaded, etc.)

Windows Azure Cache. The cache is used for storing data between requests, to give the user feedback on commands that are executing but have not yet completed. This allows BrainCredits to handle eventual consistency in the application and give the user the appearance of a synchronous experience in an asynchronous application.
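The write path described above (blob-backed queue messages) can be sketched with the StorageClient API of the era; the class, container, and naming conventions here are assumptions, not BrainCredits' actual code:

```csharp
using System;
using Microsoft.WindowsAzure.StorageClient;

public class CommandPublisher
{
    // Web role side: persist the serialized command to a blob,
    // then enqueue a small message pointing at it.
    public void Submit(byte[] serializedCommand,
                       CloudBlobContainer container, CloudQueue queue)
    {
        string blobName = Guid.NewGuid().ToString("N");
        CloudBlob blob = container.GetBlobReference(blobName);
        blob.UploadByteArray(serializedCommand);

        // The queue message carries only the blob name, so commands
        // of unpredictable size stay under the 8 KB message limit.
        queue.AddMessage(new CloudQueueMessage(blobName));
    }
}
```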

One point about VM size: A Small VM instance provides approx. 100Mbps bandwidth. If BrainCredits found a performance bottleneck due to reading and writing command and event content that impacted total round-trip processing time for a large command or event, a larger VM size would have been a viable solution. However, based on testing, a Small instance provided very acceptable customer-facing performance. By keeping the VM size at Small (e.g. 1 core), the “idle-time” cost is kept to a minimum (e.g. 2 Small Web Role instances + 1 Small Worker Role instance equates to approx. $270 as a baseline monthly compute cost). Medium VMs would increase this minimum cost to about $540. It’s much more economical to scale out to multiple VM instances as needed, then scale back during less-busy time periods.

There are a few key points illustrated by the BrainCredits architecture:

Focus on End-user experience. By following a CQRS approach and handling updates asynchronously, Web response time is unaffected by long-running background processes.

Scalability. BrainCredits separated all background processing into a Worker Role. While it’s entirely possible to process queues asynchronously in a Web Role thread, this would impact scale-out options. With a separate Worker Role, the system may be fine-tuned to handle user-traffic and background-processing load independently.

Cost footprint. VM size selection is important when considering minimum run-rate. While it’s tempting to go for larger VMs, it’s often more cost-effective to choose the smallest VM size, based on memory, CPU, and network bandwidth needs. For specific details about VM sizes, see this MSDN article.

Remember that there are often several ways to solve a particular problem. Feel free to incorporate these solution patterns into your own application, improve upon it, or take a completely different approach. Feel free to share your comments and suggestions here as well!

Stay tuned for the next post in the Windows Azure ISV Blog Series; next time we’ll share Digital Folio’s experience with Windows Azure.

Whitewater automates enterprise backup to the cloud. The product from Riverbed is a gateway that combines data de-duplication, network optimization, data encryption, and integration with Windows Azure storage services through a single, virtual, or physical appliance.

No longer do enterprises need to use tape and move the files from disk to disk. Nor does an enterprise need to pay for a traditional disaster recovery site.

Whitewater cloud storage gateways and public cloud storage provide secure, off-site storage with restore-anywhere capability for disaster recovery (DR). Data can be recovered by Whitewater from any location. Public cloud storage offers excellent DR capabilities without the high costs associated with remote DR sites or colocation service providers. With the public cloud and Whitewater cloud storage gateways, any size organization can significantly reduce its business risk from unplanned outages without the large capital investments and running costs required by competing solutions.

The Problem

How do you eliminate dependence on unreliable, error-prone tape systems for backup and DR?

Smaller offices, which may have few IT pros, are often focused on other priorities. Backups can be lower on the priority list for smaller businesses.

How can offsite vaulting expenses be reduced while ensuring high speed of recovery?

The Solution

Whitewater is either a gateway appliance (with dedicated hardware and software) or software on an existing server.

First, IT Pros walk through a few steps to configure the backup software and Whitewater. Use your own backup software and point it to Whitewater as the backup target. Whitewater doesn’t even need to replace the existing backup protocol; it just points to Windows Azure storage as a secure location for your data.

Then Whitewater performs de-duplication, compression, and encryption before data is moved to Azure blob storage. Duplicate copies of your data are made and copied to Windows Azure. Windows Azure also copies the data to a data center in the same international region.

Whitewater cloud storage gateways are available in four models, as either virtual or physical appliances, to meet a wide range of needs.

Architecture

The customer's backup software pulls data from file servers, application servers, and email servers, just as you would expect. The backup software points to the Whitewater software or appliance, which optimizes and deduplicates the data.

It then sends the data to Windows Azure for storage.

Whitewater requires the security keys that are supplied to the Whitewater device. The customer holds the keys to the data maintained by Whitewater and the keys to the data stored in Azure. For Whitewater not to be a single point of failure, you can configure another Whitewater device with the same key to execute the restore.

Whitewater uses the Server Message Block (SMB) protocol, also known as the Common Internet File System (CIFS) protocol, to act as a Network Attached Storage (NAS) target. Windows-powered NAS includes advanced availability features such as point-in-time data copies, replication, and server clustering.

Backups are made to the Whitewater appliance, which optimizes and deduplicates the data.

The customer’s Azure account sets aside one specific account or directory for each Whitewater.

Real-time replication. Real-time replication transmits data quickly so that all backup data is safely stored in the cloud as soon as possible, and it ensures synchronization between the cloud and the locally cached data set.

Strong 256-bit encryption and secure transmission. Whitewater secures data in-flight using SSL v3, and at rest using 256-bit AES encryption. Whitewater leverages an innovative key management system that allows organizations to carefully manage data security and offers the flexibility to restore data to any location. Encryption keys are kept safe by the IT administrator.

Local optimized disk cache. Whitewater keeps the most recent, and most-often-accessed data in deduplicated and compressed format within the cache to greatly increase the amount of data stored locally to speed recoveries. If data needing restoration is no longer completely in its local cache, the Whitewater gateway will supplement in-cache data with data from cloud storage.

Drop-in installation. Whitewater gateways install quickly and easily, requiring only a simple change of the backup target in the backup application. As Whitewater gateways appear to the backup application as a CIFS share or NFS export, the change required is minimal.

About Riverbed

Riverbed delivers performance for the globally connected enterprise. With Riverbed, enterprises can successfully and intelligently implement strategic initiatives such as virtualization, consolidation, cloud computing, and disaster recovery without fear of compromising performance. By giving enterprises the platform they need to understand, optimize and consolidate their IT, Riverbed helps enterprises to build a fast, fluid and dynamic IT architecture that aligns with the business needs of the organization. Additional information about Riverbed (NASDAQ: RVBD) is available at www.riverbed.com.

Targeted Audience

This specification is intended for architects, project managers, software design engineers, and developers.

SSL Certificate

SSL uses an encryption mechanism to transport data between client and server. The encryption uses a private key/public key pair, ensuring that data encrypted with one key can only be decrypted with the other key of the pair.

The trick in a key pair is to keep one key secret (the private key) and to distribute the other key (the public key) to everybody (the clients). On the client side, the public key is stored in a certificate.

A certificate contains information about the owner, like e-mail address, owner’s name, certificate usage, duration of validity and the certificate ID of the person who certifies (signs) this information.

The certificate also contains the public key and, finally, a hash to ensure that the certificate has not been tampered with. Usually your browser or application has already loaded the root certificates of well-known Certification Authorities (CAs), or root CA certificates. The CA maintains a list of all signed certificates as well as a list of revoked certificates.

Note: Developers can also generate a self-signed certificate for development and testing purposes.

Step 2: Use the following command from the command prompt. Before running the command, don’t forget to replace www.yourserver.com with your service URL. It is mandatory that the CN value be the same as the service URL to expose the service over HTTPS.
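For reference, a typical makecert invocation for a self-signed SSL server certificate looks something like the following (the dates and certificate store are illustrative; treat this as a sketch and adjust to your environment):

```
makecert -r -pe -n "CN=www.yourserver.com" -b 01/01/2012 -e 01/01/2017 -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12
```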

Click on the “Certificates” tab, then click on “Add Certificates”. This step will display the below screen.

Click the button in the “Thumbprint” column to browse for the certificate in the certificate store, select your certificate, and click the OK button.

Now switch to the “Endpoints” tab, make the below highlighted configuration, and select the SSL certificate from the dropdown. It is the same certificate created in the previous step.

Now switch to the “Configuration” tab and make the below highlighted configuration.

Build the project and launch the service; it should be launched over HTTPS.

Web.Config Configuration

Apart from the above changes, a few more changes need to be made in the service's Web.config file to complete the configuration.
i. Add the following attribute to the serviceMetadata tag as highlighted: <serviceMetadata httpsGetEnabled="true" httpGetEnabled="true"/>

Build the project and launch the service; it should be launched over HTTPS. Package the service and upload it to Windows Azure with the following:

Service Package

Service Configuration

Certificate Private Key

Configure Client on Secure Channel (HTTPS)

Once the service reference to the above service is added, the following code change is required at the client:
Client credentials need to be passed on the proxy.
Sample code: AzureZoneContractClient proxy = new AzureZoneContractClient();
if (proxy.ClientCredentials.UserName.UserName == null)
{
proxy.ClientCredentials.UserName.UserName = CommonFunction.UserID;
}

Windows and Web Client:

During development and testing, a self-signed certificate is used, which is not recognized by a standard CA, so the application throws an error. To suppress the error, the following modification must be made to the client-side code.

Note: This step is not required if the service is configured with a valid SSL certificate.
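A common way to sketch this override in the Windows/web client (the helper name is hypothetical; this disables certificate validation entirely, so use it for development only and never ship it):

```csharp
using System.Net;

public static class DevCertificateHelper
{
    // Development/testing only: accept any server certificate,
    // including our self-signed one.
    public static void TrustAllCertificates()
    {
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) => true;
    }
}
```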

Prologue:

In this article we shall discuss theme extensions for LightSwitch applications. As a continuation of my previous article titled “How to create shell extension for lightswitch application”, today we will create a demo theme extension.

What is Theme?

A theme is a collection of property settings that allow you to define the look of screens and controls, and then apply the look consistently across screens in a LightSwitch application, across an entire LightSwitch application, or across all LightSwitch applications on a server.
For a LightSwitch application, we can create a theme as an extension. In this post we will create a sample theme extension.

Preparing the Solution:

Let us start VS 2010 or VS LightSwitch 2011 [if you have it installed]. Create a sample theme extension as shown in the figure.
Follow the numbering shown in the above figure. Name the extension “HowToCreateCustomThemeLightSwitch”. Once you have created the extension application, we need to add the theme file,
as shown in the figure.

Select the “CustomTheme.Lspkg” project.

Then select Add –> New Item to add the new theme file.

If you click on the “New Item” option, you will see a window which lists all the available extension types. Select the Theme extension type as shown in the above figure. After adding the theme file, two theme files will be generated in the “CustomTheme.Client” project, as shown in the below figure.

NewTheme.cs – This file contains the properties for the newly created theme.

NewTheme.xaml – The XAML file is a ResourceDictionary which has the theme definitions, like font color, size, and screen background colors.

The below example XAML code shows the theme definition.

<!-- Control: Screen -->

<!-- States: NormalBrush -->

<!-- ScreenBackground - The background of the screen. -->

<SolidColorBrush x:Key="ScreenBackgroundBrush" Color="Pink"/>

<!-- ScreenLoadingBackground - The background of a control that is loading -->

<!-- ScreenControlBorder - The border of a control that is not based on another control already defined -->

<SolidColorBrush x:Key="ScreenControlBorderBrush" Color="Black"/>

In this sample code, we have defined the screen's background color as "Pink".

Theme Metadata:

Metadata about the newly created theme resides in the NewTheme.lsml file. Open the .lsml file as shown in the figure.
This metadata file contains the information about the theme created.

<?xml version="1.0" encoding="utf-8" ?>
<ModelFragment
  xmlns="http://schemas.microsoft.com/LightSwitch/2010/xaml/model"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Theme Name="NewTheme">
    <Theme.Attributes>
      <DisplayName Value="Custom Theme"/>
      <Description Value="This is new custom theme"/>
    </Theme.Attributes>
  </Theme>
</ModelFragment>

Building and Installing the Theme Extension:

So we have created a theme in which the screen background is Pink. Build the demo extension application; it will create a .VSIX file in the Bin folder as shown in the figure.
Press Ctrl and click the link highlighted in yellow, i.e. the .VSIX file path. It will ask you to Open or Save the file. Just click Open.
When you click Open, an install wizard opens as shown in the figure above. Click Install to install our sample theme extension.

After installing the sample theme, we need to use it. Just open a LightSwitch application and go to the Project Properties page.
On the Project Properties page, go to the Extensions tab and select the theme extension we created, as shown in the figure above.
After selecting the sample theme, set it as the theme for our application, as shown in the figure.

Application in Action:

Prologue:

In this article, we shall discuss how to access the controls of one screen from another screen. There are situations where we need to add or remove controls depending on the option the user has selected.

Let us work through an example that adds or removes controls of one screen from another screen.

Preparing the Solution:

Fire up Visual Studio LightSwitch 2011 and create a project as shown in the figure below.

Follow the numbered sequence shown in the figure to create the project.

Designing the Entities:

Designing the Screens:

For this demo application, we need to create two screens. The first screen has a button on its command bar; when you click that button, a new button will be created on the other screen.

In case you haven’t tried it, Extensions Made Easy includes a really easy way to add commands to the ribbon. By default, the code you want executed will run on the dispatcher of the currently active screen, if there is one, or on the main dispatcher if there is no currently active screen.

However, the former sometimes poses a problem. For certain pieces of code, you want to be on the main thread because you are going to access global resources for example. For this reason, you might sometimes want to force your command’s code to run on the main thread, and this can now be done since Extensions Made Easy v1.6, thanks to an overloaded constructor in the ExtensionsMadeEasy.ClientAPI.Commands.EasyCommandExporter…

This piece of code will execute on the dispatcher of the currently active screen.

But as Bala indicated, sometimes a command should be visible, but disabled based on the situation. Extensions Made Easy v1.8 just got pushed to the gallery, including a new property on the ExtensionsMadeEasy.ClientAPI.Commands.EasyCommandExporter called IsEnabled.

If you stop and think about it, an automobile is an amazing thing. The automobile brings together several key elements such as cost, fuel, and rubber tires. The result is a massive increase in productivity for the world.

The key is productivity. This is what fuels growth.

Let’s take a look at the advancements HTML 5 provides. Yes, there are new tags and new functionality, but where is any increased productivity? Don’t get me wrong, HTML 5 is a great technology, but, I have seen too many great technologies fail to reach critical mass because they did not increase productivity. Did you ever turn all your HTML 4 document types to Strict, or did you leave them as Transitional? Are you using .css for all your styling? Is ‘lack of time’ your reason?

I love the Model-View-View Model pattern (MVVM), and LightSwitch uses it under the covers, but it never caught on with most of the developers I know, because for them, it actually goes in the opposite direction. It takes more code and time for them to use the MVVM pattern, than it would to not use it.

Yes it was more testable, but guess what, most of the developers I know don’t care. They see Test Driven Development (TDD) as a productivity sucking exercise. I am not saying that they are correct, I am just saying that this is how they feel.

People act based on ‘motivations’ and increased productivity translates into ‘more for less work’, this translates into ‘more money’… and don’t forget, it is always about the money.

90% + Productivity With Visual Studio LightSwitch

Visual Studio LightSwitch is the result of a convergence of a number of technologies (Linq, Silverlight, MVVM, WCF, Entity Framework) wrapped into a model-centric tool built inside Visual Studio. Like a combustion engine sitting on rubber tires, it is a complete package that works. It achieves the number one thing required for success: increased productivity.

I start a new discussion because for some strange reason I cannot reply to a discussion I started.
My question has to do with composition. When I was looking forward to read your code (which I did)
I was hoping to see how one can use lightswitch and composition with his own contracts/interfaces (like IThemeExporter).
I have tried using my own contracts with Import/Export attributes but somehow composition seems to ignore my contracts.
I have ended up believing that somehow you have to "register" your contracts with MEF to achieve this, and I was expecting to see this in your solution, but I don't see anything like that. If you have any light to shed...

At the time, I couldn’t figure out what he meant, so I opened a forum thread asking for more explanation (the Q&A section of an extension is really for 1 Q and several A’s, and isn’t really fit for a discussion…), but that effort turned out to be in vain, and I never understood what he wanted to achieve until about 7 minutes ago, when I pushed ExtensionsMadeEasy v1.7 to the gallery…

O yea…

Extensions Made Easy v1.7 is out!

Will blog the changes (they are really minor) after this post…

Wait…

This is about to become the most confusing blog post ever, but…

Extensions Made Easy v1.8 is out!

Had a really good idea after writing “v1.7 is out”, so went ahead and implemented it, published the new version, and now continuing my blogging streak of tonight…

Now where was I… Oh yea… This fellow named Kostas asked me something about LightSwitch and MEF, which didn’t make any sense to me until about 3 hours and 7 minutes ago. I pushed a new version of ExtensionsMadeEasy to the gallery, which contains a bugfix that was long overdue regarding some MEF imports not being found…

I'm often taken aback by businesses that are unaware of the influence of cloud computing when it's about to hit them upside the head.

We saw this before, back in the early days of the Web. Some businesses got it and thrived. Others did not, and they had to play catch-up or shut their doors. Indeed, a great business skill is to understand when technology will require you to move in different directions, and cloud computing is another instance of that shift.

What are they missing? I have a few items for the list.

Reduction in IT overhead creates a price advantage. How can your competition sell its product at the price it does and still make money? Well, instead of putting $50 million a year into IT, the company has cut its costs in half through the use of cloud-based services. Or perhaps it avoided an investment in that new data center. Instead, thanks to the use of the cloud, your competitor passed that savings on as lower prices, which increased sales and led to higher profits.

Better use of business data. The cloud provides the ability to deal with massive amounts of data, a traditionally cost-prohibitive endeavor. Many businesses are taking advantage of the access to information to better determine trends and opportunities they were once blind to. They get better and make smarter allocation of resources by using those newfangled, cloud-based big data systems -- and the ability they provide to turn this new access to business intelligence into profit. If that's your competition, watch out.

Expansion through new IT agility. Businesses looking to expand through acquisition are often blocked by the years it can take to convert existing IT systems. The cloud provides much better agility, including the ability to quickly bring on board infrastructure, applications, and business data. By leveraging this newfound agility, many businesses will find that they can expand much faster -- at less cost and with less risk.

The use of cloud computing becomes a competitive advantage. You need to make sure you see this technology coming, or it could quickly run you over if your competition gets it first.

The Microsoft program requires a minimum level of redundancy and fault tolerance across the servers, storage, and networking for both the management and production virtual machine (VM) clusters. These requirements help to ensure a certain level of fault tolerance while managing private cloud pooled resources.

This IBM Redpaper™ publication explains how to set up and configure the IBM 8-Node Microsoft Private Cloud Fast Track solution used in the actual Microsoft program validation. The solution design consists of Microsoft Windows Server 2008 R2 Hyper-V clusters powered by IBM System x3650 M3 servers with IBM XIV® Storage System connected to IBM converged and Ethernet networks. This paper includes a short summary of the Reference Configuration software and hardware components, followed by best-practice implementation guidelines.

This paper targets IT engineers at mid-to-large sized organizations who are familiar with the hardware and software that make up the IBM Cloud Reference Architecture. It also benefits the technical sales teams for IBM System x® and XIV and their customers who are evaluating or pursuing Hyper-V virtualization solutions.

After you have created the required connections to both your private and public clouds, and set up the libraries to serve your clouds with resources, you should easily be able to deploy new services in both clouds using App Controller.

From your Public Cloud Library:

1. First, copy the packages over to your public cloud library

2. Right click your package and click ‘Deploy’

3. Give your service a name and a public URL, and specify the preferred region (you can also specify an affinity group if you have set one up)

4. Select this hosted service for deployment

5. Name this deployment, optionally specify the operating system version (Azure OS), and select staging or production. If this service is ready and you want it to be available immediately, select production

6. You can optionally specify the roles and instances as well. In this example, 1 instance is enough

After the job is done, you’ll find your service up and running in the Services tab in App Controller.

This course focuses on how using technologies and tools from Microsoft can help your business build, deploy, and maintain a private cloud.

The course covers the core Windows Server products, and how to use them to build and support the virtualized and physical resources that are part of your private cloud infrastructure. You will be exposed to common cloud computing configuration and management practices, as well as technical details to help you be successful in building a private cloud for your business.

Lastly, you will learn how using tools and technologies from Microsoft as part of your private cloud will benefit both your organization and you as the IT professional.

After completing this course, try out what you’ve learnt by downloading Windows Server 2008 R2 and System Center from the TechNet Evaluation Center.

The London Windows Azure Users Group is a new user group founded by Andy Cross, Richard Conway and Hancock and Parsons. The group is dedicated to building a sustainable community of Azure users that want to share experiences and code! Amongst other things, the group puts on meetings the first Tuesday evening of every month with one or two speakers on Windows Azure and related topics. Feel free to come and rub shoulders with other Azure users and put your questions and programming problems to seasoned programmers. Beer and pizza are provided. Register via the website @ http://www.lwaug.net.

Related Links:

Starting January 23rd we will be offering six weeks of FREE assistance to help UK companies explore and adopt the Windows Azure Platform. Check out the details and how to sign up at www.sixweeksofazure.co.uk.

Before Amazon introduced the Elastic Load Balancing (ELB) service, the only way to do load balancing in EC2 was to use one of the software-based solutions such as HAProxy or Pound.

Having just one EC2 instance running a software-based load balancer would obviously be a single point of failure, so a popular technique was to do DNS Round-Robin and have the domain name corresponding to your Web site point to several IP addresses via separate A records. Each IP address would be an Elastic IP associated to an EC2 instance running the load balancer software. This was still not perfect, because if one of these instances would go down, users pointed to that instance via DNS Round-Robin would still get an error until another instance would be launched.
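The failure window described above is easy to see in a quick simulation. This is a minimal sketch (the IP addresses are illustrative placeholders, not real Elastic IPs) of how a round-robin resolver keeps handing out a dead instance's address until DNS is updated:

```python
from itertools import cycle

# Hypothetical A records for three load-balancer instances behind
# DNS Round-Robin. These addresses are placeholders.
a_records = ["198.51.100.10", "198.51.100.11", "198.51.100.12"]

def round_robin(records):
    """Yield addresses the way a round-robin resolver hands them out."""
    return cycle(records)

resolver = round_robin(a_records)
healthy = {"198.51.100.10", "198.51.100.12"}  # .11 has gone down

# Clients keep being handed the dead IP until it is replaced in DNS:
handed_out = [next(resolver) for _ in range(6)]
errors = [ip for ip in handed_out if ip not in healthy]
print(errors)  # one client per rotation still hits the dead instance
```

Until the dead instance's A record is removed or replaced, a fraction of clients (one per rotation here) get errors, which is exactly the gap the managed ELB service closes.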

Another issue that comes up all the time in the context of load balancing is SSL termination. Ideally you would like the load balancer to act as an SSL end-point, in order to offload the SSL computations from your Web servers, and also for easier management of the SSL certificates. HAProxy does not support SSL termination, but Pound does (note that you can still pass SSL traffic through HAProxy by using its TCP mode; you just cannot terminate SSL traffic there).

In short, if Elastic Load Balancing weren’t available, you could still cobble together a load balancing solution in EC2. There is no reason to ‘roll your own’ anymore, however, now that you can use the ELB service. Note that HAProxy is still the king of load balancers when it comes to the different algorithms you can use (and to a myriad of other features), so if you want the best of both worlds, you can have an ELB upfront, pointing to one or more EC2 instances running HAProxy, which in turn delegate traffic to your Web server farm.

Elastic Load Balancing and the DNS Root Domain

One other issue that comes up all the time is that an ELB is only available as a CNAME (this is due to the fact that Amazon needs to scale the ELB service in the background depending on the traffic that hits it, so they cannot simply provide an IP address). A CNAME is fine if you want to load balance traffic to www.yourdomain.com, since that name can be mapped to a CNAME. However, the root or apex of your DNS zone, yourdomain.com, can only be mapped to an A record, so in theory you could not use an ELB for yourdomain.com. In practice, however, there are DNS providers that allow you to specify an alias for your root domain (I know Dynect does this, as does Amazon’s own Route 53 DNS service).
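The apex restriction can be captured in a few lines. This sketch (domain names are illustrative; the function is a toy, not a DNS library) encodes the rule that makes a raw ELB name unusable at the zone root:

```python
# Toy model of the DNS rule discussed above: a CNAME may not sit at the
# zone apex, which is why an ELB name (available only as a CNAME target)
# cannot serve yourdomain.com directly without provider-specific aliasing.
def can_use_record(name, zone_apex, record_type):
    """Return True if record_type is allowed at this name under the apex rule."""
    if name == zone_apex and record_type == "CNAME":
        return False  # the apex must carry SOA/NS records, so no CNAME
    return True

elb_dns_name = "my-elb-123456.us-east-1.elb.amazonaws.com"  # CNAME target only
print(can_use_record("www.yourdomain.com", "yourdomain.com", "CNAME"))  # True
print(can_use_record("yourdomain.com", "yourdomain.com", "CNAME"))      # False
print(can_use_record("yourdomain.com", "yourdomain.com", "A"))          # True
```

Providers like Route 53 work around this by resolving the alias server-side and answering with A records, so clients never see a CNAME at the apex.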

Elastic Load Balancing and SSL

The AWS console makes it easy to associate an SSL certificate with an ELB instance, at ELB creation time. You do need to add an SSL line to the HTTP protocol table when you create the ELB. Note that even though you terminate the SSL traffic at the ELB, you have a choice of using either unencrypted HTTP traffic or encrypted SSL traffic between the ELB and the Web servers behind it. If you want to offload the SSL processing from your Web servers, you can choose HTTP between the ELB and the Web server instances.

If however you want to associate an existing ELB instance with a different SSL certificate (say for instance you initially associated it with a self-signed SSL cert, and now you want to use a real SSL cert), you can’t do that with the AWS console anymore. You need to use command-line tools. Here’s how.

Before you install the command-line tools, a caveat: you need Java 1.6. If you use Java 1.5 you will most likely get errors such as java.lang.NoClassDefFoundError when trying to run the tools.
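As a hedged sketch of what the invocation could look like: the command name and flags below are assumptions from memory of the legacy ELB API tools (they may differ in your tools version), and the ARN is a placeholder. We only assemble and print the command here rather than run it, since running it requires AWS credentials:

```python
import shlex

# Hypothetical: swap the SSL certificate on an existing ELB listener using
# the legacy command-line tools. Command name, flags, and ARN are assumed,
# not verified; treat this as a template to check against your tools' help.
cert_arn = "arn:aws:iam::123456789012:server-certificate/stage-mysite-cert"
cmd = [
    "elb-set-lb-listener-ssl-certificate",
    "stage-lb",             # the ELB instance handling stage.mysite.com
    "--lb-port", "443",     # the SSL listener to update
    "--cert-id", cert_arn,  # IAM server certificate to associate
]
print(" ".join(shlex.quote(part) for part in cmd))
```

The certificate itself must first be uploaded to IAM as a server certificate, whose ARN is what the listener update references.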

That's it! At this point, the SSL certificate for stage.mysite.com will be associated with the ELB instance handling HTTP and SSL traffic for stage.mysite.com. Not rocket science, but not trivial to put together all these bits of information either.

If you look closely at the services and facilities provided by AWS, you'll see that we've chosen to factor architectural components that were once considered elemental (e.g. a server) into multiple discrete parts that you can instantiate and control individually.

For example, you can create an EC2 instance and then attach EBS volumes to it on an as-needed basis. This is more dynamic and more flexible than procuring a server with a fixed amount of storage.

Today we are adding additional flexibility to EC2 instances running in the Virtual Private Cloud. First, we are teasing apart the IP addresses (and important attributes associated with them) from the EC2 instances and calling the resulting entity an ENI, or Elastic Network Interface. Second, we are giving you the ability to create additional ENIs, and to attach a second ENI to an instance (again, this is within the VPC).

Each ENI lives within a particular subnet of the VPC (and hence within a particular Availability Zone) and has the following attributes:

A very important consequence of this new model (and one that took me a little while to fully understand) is that the idea of launching an EC2 instance on a particular VPC subnet is effectively obsolete. A single EC2 instance can now be attached to two ENIs, each one on a distinct subnet. The ENI (not the instance) is now associated with a subnet.

Similar to an EBS volume, ENIs have a lifetime that is independent of any particular EC2 instance. They are also truly elastic. You can create them ahead of time, and then associate one or two of them with an instance at launch time. You can also attach an ENI to an instance while it is running (we sometimes call this a "hot attach"). Unless the Delete on Termination flag is set, the ENI will remain alive and well after the instance is terminated. We'll create an ENI for you at launch time if you don't specify one, and we'll set the Delete on Termination flag so you won't have to manage it. Net-net: You don't have to worry about this new level of flexibility until you actually need it.
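The lifecycle above can be sketched as a toy model. This is not the AWS API (all classes and names here are illustrative); it just encodes the rule that an ENI outlives its instance unless Delete on Termination is set:

```python
# Toy model of the ENI lifecycle: an ENI's lifetime is independent of any
# EC2 instance unless its Delete on Termination flag is set.
class Eni:
    def __init__(self, subnet, delete_on_termination=False):
        self.subnet = subnet
        self.delete_on_termination = delete_on_termination
        self.deleted = False

class Instance:
    def __init__(self):
        self.enis = []

    def attach(self, eni):
        # Works at launch time or while running (a "hot attach").
        self.enis.append(eni)

    def terminate(self):
        for eni in self.enis:
            if eni.delete_on_termination:
                eni.deleted = True
        self.enis = []

# A pre-created ENI survives instance termination...
eni = Eni(subnet="private-a")
box = Instance()
box.attach(eni)
box.terminate()
print(eni.deleted)  # False: the ENI remains, ready to attach elsewhere

# ...while an auto-created ENI with the flag set is cleaned up.
auto_eni = Eni(subnet="public-a", delete_on_termination=True)
box2 = Instance()
box2.attach(auto_eni)
box2.terminate()
print(auto_eni.deleted)  # True
```

The surviving-ENI case is what enables the "Low-Budget High Availability" pattern described below: the same ENI (and its IP and MAC address) simply moves to a replacement instance.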

You can put this new level of addressing and security flexibility to use in a number of different ways. Here are some that we've already heard about:

Management Network / Backnet - You can create a dual-homed environment for your web, application, and database servers. The instance's first ENI would be attached to a public subnet, routing 0.0.0.0/0 (all traffic) to the VPC's Internet Gateway. The instance's second ENI would be attached to a private subnet, with 0.0.0.0/0 routed to the VPN Gateway connected to your corporate network. You would use the private network for SSH access, management, logging, and so forth. You can apply different security groups to each ENI so that traffic on port 80 is allowed through the first ENI, and traffic from the private subnet on port 22 is allowed through the second ENI.

Multi-Interface Applications - You can host load balancers, proxy servers, and NAT servers on an EC2 instance, carefully passing traffic from one subnet to the other. In this case you would clear the Source/Destination Check Flag to allow the instances to handle traffic that wasn't addressed to them. We expect vendors of networking and security products to start building AMIs that make use of two ENIs.

MAC-Based Licensing - If you are running commercial software that is tied to a particular MAC address, you can license it against the MAC address of the ENI. Later, if you need to change instances or instance types, you can launch a replacement instance with the same ENI and MAC address.

Low-Budget High Availability - Attach an ENI to an instance; if the instance dies, launch another one and attach the ENI to it. Traffic flow will resume within a few seconds.

Here is a picture to show how all of the parts -- VPC, subnets, routing tables, and ENIs -- fit together:

I should note that attaching two public ENIs to the same instance is not the right way to create an EC2 instance with two public IP addresses. There's no way to ensure that packets arriving via a particular ENI will leave through it without setting up some specialized routing. We are aware that a lot of people would like to have multiple IP addresses for a single EC2 instance and we plan to address this use case in 2012.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.