C++ is truly my preferred language, and I can be nearly as productive in it as I can with C# -- especially given C++/CX makes it easy to interact with the Windows Runtime. I also wanted to work directly with DirectX, and C++ is the best choice for that.

I work on the Visual C++ documentation team, so perhaps my choice was somewhat slanted. I could have written part of my app in C# and used C++ to interoperate with DirectX, and perhaps that's the right choice for some apps. But I wanted to experience what
it was like to use pure ISO C++ to implement as much of my app logic as I could, and C++/CX to bridge that code with the UI and the Windows Runtime. Remember, C++/CX does not bring in the CLR, so your app written in C++ and C++/CX is 100% native!

How did you utilize modern C++?

I made a concerted effort to use pure ISO C++ where I could, and C++/CX along the "boundary" of my app where I need to communicate with the Windows Runtime. The majority of this project is new code; if I had more existing C++ code, the delineation would
be even more apparent. For instance, if I had C++ code in the form of dynamic (.dll) or static (.lib) libraries that work with the weather service, Bing Maps, and so on, I could have referenced those from this project. The focus of this project would then
be more on using the Windows Runtime and C++/CX to create the user experience, referencing my reusable, tested code where needed.

A quick tour of C++11 and C++14 in Weathr

This project was a good opportunity to work with features from C++11 and C++14. My favorite new C++ feature is
lambda expressions. Lambda expressions are particularly important when working with asynchronous code, especially
PPL tasks. Here’s an example from
DirectXHelper.h that uses PPL tasks to read from a binary file asynchronously. A lambda expression defines the work to perform after the file is read from disk.

Here’s another example (ForecastManager.cpp) that finds the closest
Forecast object that’s within range of the given latitude and longitude coordinates. It uses
range-based for loops to iterate over collections,
auto to have the compiler infer types, and the free-form
begin and end functions to access the container boundaries.

Memory management

Another important feature that C++11 and C++14 enable is improved support for memory management, namely through
smart pointers. A
smart pointer automatically manages object lifetime and frees you from the need to manually release the memory of heap-allocated objects when they’re no longer needed – thus helping to prevent memory leaks. I used smart pointers throughout this project – you’ll
find no instances of new/delete or malloc/free (OK, with one exception –
create_delayed_task uses new/delete, but that’s code I’ve reused, and it carefully frees what it allocates.) For pure C++ code,
std::make_shared and
std::make_unique are your friends.

For DirectX and other COM-based components, I use the
Windows Runtime Template Library (WRL)
ComPtr class. Think of ComPtr as an improved version of ATL’s
CComPtr. ComPtr is a smart-pointer type that encapsulates calls to
AddRef and Release as objects are created, passed, and leave scope. With
ComPtr, there’s rarely the need to manage calls to
AddRef and Release yourself.

For C++/CX types, the runtime uses reference counting to manage object lifetime. So although you’ll see plenty of uses of
ref new, objects are automatically deleted when no other code references them.

Lifetime considerations for async programming

When writing asynchronous code, there are times when it may not be obvious that you need to keep an object alive; namely, when you hold a reference to memory or a data member that is owned by a ‘parent’ object. For instance, when I generate the images that appear
on the live tile (LiveTileScheduler.cpp), I use
Windows::Storage::Streams::IBuffer to access the underlying pixel data of a
Windows::UI::Xaml::Media::Imaging::RenderTargetBitmap object. In the final continuation of my task chain, I need to keep the
RenderTargetBitmap object alive so that the IBuffer that points to its contents remains valid. Otherwise, the
RenderTargetBitmap itself will be deleted and the IBuffer will point to invalid memory.

Another pattern I’ve discovered occurs when you have a chain of asynchronous tasks and one task creates an object or resource, and a subsequent task uses that resource. Although you can often simply create the resource in one task and pass it to the
next, this is not always possible. The issues here are lifetime and
indirection – whether the object has a C++ or a C++/CX type, because we’re working asynchronously, the calling function that sets up the background work soon exits, so any objects allocated on its stack fall out of scope. To keep objects
shared among tasks alive, you must capture smart pointers (typically, a
shared_ptr for C++ and C++/CX objects) in all lambdas (or functors) that reference them. By using
shared_ptr, you create a level of indirection that enables the object to be allocated (e.g. written to) in one task and used (e.g. read) by a subsequent task.
Here’s an example (MainPage.xaml.cpp) that implements the command that navigates you to a random location on the globe. It uses
shared_ptr<Forecast^> so that a forecast object can be created by one background task and later used by another.

Circular references

Another caveat of smart pointers is the notion of circular references. A circular reference occurs when two objects each hold a smart pointer (or similarly managed reference) to the other. Because each object always has at least one
reference to it, neither is ever released. The .NET garbage collector can deal with circular references for you, but in C++, you have to break the cycle yourself.
In Weathr, a ForecastGroup (ForecastGroup.h) groups
Forecast (Forecast.h) objects. I need for each
Forecast object to reference its parent group (as a data member.) These are both C++/CX types, so to break cycles, I use
Platform::WeakReference (the C++ analog is
std::weak_ptr) to enable each
Forecast object to hold a weak pointer to its parent
ForecastGroup. Weak pointers enable you to test whether the pointed-to resource is still valid.

The takeaway from all this is that although smart pointers greatly reduce the amount of effort required to manage memory, you should always understand, profile and test your code to ensure you’re getting the behavior you expect.

How did you use PPL tasks to keep the app fast and responsive?

Asynchrony is the key to keeping your Windows Store apps fast and fluid. Resumable functions (think C#'s async/await for C++) are part of the
Visual C++ Compiler November 2013 CTP. But in Visual Studio 2013,
PPL tasks are the preferred way to perform CPU-intensive work and I/O operations in the background.

Here are a few things around asynchronous programming that I had to consider while writing my app.

Make sure you’re on the right thread context

Windows Store apps use a threading mechanism that’s similar to the COM threading model. To summarize this from
MSDN:

In this model, objects are hosted in different apartments, depending on how they handle their synchronization. Thread-safe objects are hosted in the multi-threaded apartment (MTA). Objects that must be accessed by a single thread, such as UI elements,
are hosted in a single-threaded apartment (STA). In an app that has a UI, the ASTA (Application STA) thread is responsible for pumping window messages and is the only thread in the process that can update the STA-hosted UI controls. This has two consequences.
First, to enable the app to remain responsive, all CPU-intensive and I/O operations should not be run on the ASTA thread. Second, results that come from background threads must be marshaled back to the ASTA to update the UI. In a C++ Windows Store app,
MainPage and other XAML pages all run on the ASTA. Therefore, task continuations that are declared on the ASTA are run there by default so you can update controls directly in the continuation body. However, if you nest a task in another task, any continuations
on that nested task run in the MTA. Therefore, you need to consider whether to explicitly specify on what context these continuations run.

OK, so you need to run CPU-intensive and I/O operations in the background, and update your controls on the main thread. When working on
Hilo, we learned the importance of this early. We found that too often we were trying to update UI controls from the background, and were running CPU-intensive work on the main thread. To prevent this, we took the following
steps to better structure the code:

In Debug builds, record the thread ID of the ASTA thread at startup. In each task continuation, assert that we’re either running on the main (ASTA) thread or a background thread.

In task continuations, specify explicitly whether to run on the current context (e.g. the context that established the task chain, which might be the ASTA thread or MTA thread) or on a background (MTA) context.

These steps increased our confidence that our code was running where we expected. It also helped us fail more quickly – we could see right away through an assertion failure that we weren’t running on the context we expected, instead of later crashing due to
a WrongThreadException (or similar failure) and needing to retrace our steps. Although the default task continuation option is to run on the current context (again, the context that set up the task chain), making the choice
explicit forced us to consider where we needed to run, and also made the code more readable later.

I followed this pattern in Weathr. TaskExtensions.h and TaskExtensions.cpp define the interface for recording the main thread ID and checking the currently running context. Here’s the relevant part from
TaskExtensions.h.

Here’s an example from MainPage::UpdateLiveTileAsync (MainPage.xaml.cpp) that chains a number of async operations. Some parts need to run in the background, and others on the main thread. Assertions and explicit specification
of task continuation contexts help make things more maintainable.

There are also times when you're running on the MTA and need to run work on the ASTA; for example, you need to set a property on an object that's data-bound to the UI. For that, there's
run_async_non_interactive, which we developed while working on Hilo. Here's the implementation from
TaskExtensions.cpp (which #includes TaskExtensions.h).

Here's an example of run_async_non_interactive in action.
ThumbnailManager::WriteThumbnailAsync (ThumbnailManager.cpp) writes the thumbnail image for a location to disk and then sets the file path property on the given
Forecast object. Because the
ThumbnailImagePath property is data-bound to the UI, it must be set on the ASTA.

Setting an uncaught termination handler

Another practice I borrowed from Hilo was to end each task continuation chain with an
exception policy check. The PPL allows for exceptions to be preserved and handled later. If an exception goes uncaught, the runtime terminates the app. Exception policies give the app one final chance to deal with uncaught errors. For example, in your
enterprise Windows Store app, you might log unexpected failures so your IT department can later investigate them. Exception policies work by providing a
task-based continuation at the end of each task chain and retrieving the task result. Task-based continuations always run, even if one of the tasks failed or was canceled, so we have a guaranteed place to check for failure.

One benefit of exception policies is that they make the code more readable – you can see at a glance where chains of tasks logically end. This is especially useful when task chains span multiple function calls. To see where I used it in Weathr
(or where we used it in Hilo) search for occurrences of
ObserveException.

Using Visual Studio to debug uncaught task exceptions

In a Windows Store app, allowing an exception in a background task to go uncaught terminates the app. Visual Studio 2013 provides enhanced functionality for debugging unhandled exceptions. This
blog article provides a nice overview of the added functionality. To summarize, when an unhandled exception occurs, you can see the stack trace of asynchronous calls that led up to the call to the unhandled exception event handler. You can also use the
$exceptionstack pseudovariable in the Watch and other windows to view the captured stack of the most recent exception on the current thread. For example, here's what the stack trace for an exception might look like. The
line numbers make it easier to track down exactly where the call was made.

Why did you choose the C++ REST SDK instead of HttpClient?

HttpClient, which is new for Windows 8.1, is a fine choice for many apps, but I went with the C++ REST SDK for two reasons. My main reason was future portability. In keeping with the spirit of using as much pure ISO C++ as
I could, the C++ REST SDK enables my code to be more portable to different platforms. My second, more personal, reason was that because my team documents the C++ REST SDK, I wanted to further explore its functionality for myself. For your project, compare
the needs of your app against the functionality offered by the different libraries.

C++ REST SDK basics

The basic pattern that I use in Weathr to connect to REST services is:

In the task continuation returned by request, process the response. The response comes in the form of a
web::http::http_response object.

Here’s an example from Weathr. The BingLocationService::GetLocationAsync member function (BingLocationService.cpp) uses the Bing Maps Locations REST API to asynchronously query for the location at the given latitude
and longitude coordinates. The continuation chain first checks the response code, extracts the result body as JSON (using the
web::json::value class), and then pulls the required fields from the JSON payload. The result comes in the form of an app-defined
Location object.

Retrieving binary data

Despite its name, the C++ REST SDK can retrieve arbitrary payloads from HTTP servers, including binary data. On the Search page, I want to display the country flag next to each result, like this:

I also wanted to cache the flag image on disk so I don’t need to download it again later.

The SearchResultsPage::DownloadFlagImageAsync member function (SearchResultsPage.cpp) uses
http_client to download a flag image from geonames.org and write it to disk. To read binary data, this function uses the
http_response::body member function to open an asynchronous stream to the response. It then uses the
concurrency::streams::producer_consumer_buffer class to acquire a direct memory pointer to the stream data (the blocking call is OK here because we’re running on a background thread.) Finally, it
writes the data to a
Platform::Array object so that the data can be passed to the
Windows::Storage::FileIO class to be written to disk.

Retrying failed HTTP requests

Retrying failed asynchronous calls is challenging. In serial code, you can simply use a loop (e.g.
while(!succeeded) {succeeded = do_something();}), but for async you don’t quite have the same control-flow mechanism to ‘latch’ on to.

Until resumable functions become a full-fledged part of the Visual C++ compiler (although you can try them out today in the
Visual C++ Compiler November 2013 CTP), I created something that resembles a functional-programming approach, which I named
run_async_with_retry.
run_async_with_retry takes three parameters: a function that performs the task asynchronously (the
work function), a function that asynchronously determines whether to retry the operation based on the retry count (the
predicate function), and the current retry count (you typically use the default of 0 –
run_async_with_retry calls itself when the task fails and increments the retry count.) If the work function fails (throws) for any reason, it calls the predicate function. If the predicate function returns
true, it retries the operation. Otherwise, it rethrows the original exception.
Here’s the implementation (TaskExtensions.h).

// Performs the given operation, and calls the user-supplied predicate if the action fails.
// The predicate takes the retry count and returns a task that produces whether to retry the operation.
template<typename T>
inline concurrency::task<T> run_async_with_retry(const std::function<concurrency::task<T>()>& asyncOperation, const std::function<concurrency::task<bool>(uint32_t)>& asyncContinuePredicate, uint32_t retryCount = 0)
{
    return asyncOperation().then([asyncOperation, asyncContinuePredicate, retryCount](concurrency::task<T> previousTask)
    {
        try
        {
            // Raise any exception that occurred during the task.
            previousTask.get();
            // No exception occurred; return the previous task.
            return previousTask;
        }
        catch (const concurrency::task_canceled&)
        {
            // The previous task was cancelled. Don't retry it.
            concurrency::cancel_current_task();
        }
        catch (...)
        {
            // Some other unhandled exception occurred. Run the continue predicate.
            return asyncContinuePredicate(retryCount).then([asyncOperation, asyncContinuePredicate, retryCount, previousTask](bool result)
            {
                if (result)
                {
                    // Retry the operation.
                    return run_async_with_retry(asyncOperation, asyncContinuePredicate, retryCount + 1);
                }
                else
                {
                    // Rethrow the original exception.
                    previousTask.get();
                    __assume(0); // Will never be reached because the previous get() call throws.
                }
            // Use current context so that the operation is always called from the same context.
            }, concurrency::task_continuation_context::use_current());
        }
    // Use current context so that the operation is always called from the same context.
    }, concurrency::task_continuation_context::use_current());
}

And here’s an example. In ForecastManager::CheckinAsync (ForecastManager.cpp), I call into the weather service to get updated weather info. If the operation fails, I wait 1 second before retrying (guessing here that
the service is temporarily down or otherwise can’t process my request.)

You can replace the predicate function with something else, such as a message box that asks the user whether to retry the operation. I use this function for things other than HTTP requests, such as to reconstruct corrupt image files (see
LocationsPage.xaml.cpp).

Throttling HTTP requests

The World Weather Online service limits Free API usage to 3 calls per second. Therefore, I need to throttle my calls such that no more than 3 are made in a one-second window. To do so, I created the
HttpRequestThrottler class (HttpRequestThrottler.h and
HttpRequestThrottler.cpp), which uses the Asynchronous Agents Library to model the throttling mechanism using dataflow.

I won’t go much into dataflow here, but if you’re interested in this model, I encourage you to check out
Actor-Based Programming with the Asynchronous Agents Library in MSDN Magazine and the
MSDN documentation. The short story is, dataflow is a nifty way to queue up pending work. I use dataflow so that pending requests can sit in a buffer until the throttling agent is ready to process
them.

To use HttpRequestThrottler, create an instance that’s shared among all code that uses the service. Then call the
HttpRequestThrottler::DownloadAsync member function to create a PPL task that completes when the response is received. Here's an example from
WWOWeatherService::GetWeatherInfoAsync (WWOWeatherService.cpp).

I consider the HttpRequestThrottler class to be experimental, as the timing code isn’t perfect. (You may get an email from World Weather Online stating that you've exceeded the throttle limit, but the app will continue
to function.) I don't believe the C++ REST SDK currently reports server events, but perhaps there's a way to more accurately throttle the requests such that you don't exceed the 3 requests/second constraint. If you have experience in this area,
I hope you’ll consider taking a look at work item
1855 - Fix timing code in HttpRequestThrottler so that everyone can benefit from the fix.