It’s hard to believe it’s been two years (and a bit) since I blogged about AirPlay and the lack of API to display images from iOS.

Changes

Not a lot has changed since then… my code still works (good news), Apple has not released anything new for the Apple TV, at least for developers (sad news), and, like every year, I still have hopes for the next WWDC (no news, only rumours).

Things have also changed for Xamarin (more good news) and, as of today, we now have a stable unified API to target both 32-bit and 64-bit iOS devices (and satisfy Apple’s upcoming requirements for new iOS applications).

So the holidays were a good time to update my library (for another project of mine) to be built against the unified API: the shiny new Xamarin.iOS.dll (instead of monotouch.dll).

The upgrade process was quite simple. Xamarin Studio can handle a large part of it (e.g. new project types, assembly references, namespaces and API changes…). Only a few minor changes were required, in my case some (maybe 3) typecasts for native types.

Note: for classic API users I’ve kept a separate branch, named classic, to keep the old code.

3, 2, 1… action!

Something else also changed recently… it’s now possible, with iOS 8.x, to create extensions. Specifically this code could be used to create an action extension. That would be very similar, UI-wise, to the existing sample application. However it would also allow other, existing, applications (at least those using UIActivity) to gain access to the same image-on-AirPlay feature.

In fact that’s exactly what I wished for when I wrote the original code… and now I could (and did) implement this in my github repo. Hopefully it will be useful to others, in itself or as another sample of how you can create extensions using Xamarin.iOS.

A call to arms

If you have popular samples, github repos… please consider updating them to the unified API so they can continue to be useful to the community :-)

In case you have not noticed, NUnitLite has picked up speed in the last few months.

Xamarin.iOS 7.0 shipped with NUnitLite 0.9 – itself announced right before WWDC 2013, i.e. when the 6.9 series began its summer-alpha life. Then, just a few days before iOS 7 went gold (and way too close for us to update to it), the one-dot-oh version was released.

NUnitLite has grown quite a lot since we started using the 0.6 version two years ago. IMHO the most recent and exciting enhancement is related to async test support. That’s something I missed a lot from the Silverlight test harness we used back then for Moonlight.

What’s presently shipping with Xamarin.iOS 7.0 already has support for async tests – but don’t stop reading yet! There were still a few issues, because the runner (Touch.Unit) needs to run some code on the main (UI) thread – which was never an issue before executing async tests (yep, that means testing the test runner). The known issues were fixed along with the update to 1.0, so stuff like this will now work as expected:
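A minimal sketch of what such an async test can look like (the fixture and test names are illustrative, not from the original post):

```csharp
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class AsyncTests {

	[Test]
	public async Task AddAsync ()
	{
		// the runner now awaits the returned Task instead of reporting
		// success as soon as the method yields
		int result = await Task.Run (() => 40 + 2);
		Assert.That (result, Is.EqualTo (42), "result");
	}
}
```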

So if you’re curious about the latest features (async or not) and can’t wait for the next Xamarin.iOS then I encourage you to try it today right off my github repositories for Touch.Unit (the iOS based runner) and NUnitLite. Please file any issue so we can make 1.0 shine in a future release of Xamarin.iOS!

And by random changes I mean changes to Random. Here’s what’s affected by the change:

Performance

The old algorithm used by Mono was not very efficient, even less so on systems where floating-point computations are slow. The new algorithm, JKISS, is faster and does not require floating-point math unless you ask for a System.Double value.

The requirement to use floating point, basically calling the protected Sample method, was removed in .NET 2.0 – but that change was never implemented in Mono.
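For reference, here is a sketch of the JKISS generator as published by David Jones (the constants are part of the algorithm; Mono’s actual implementation may differ in details). Note that the state updates need only integer math:

```csharp
// JKISS combines a linear congruential generator, a xorshift generator
// and a multiply-with-carry generator; the period is about 2^127.
class JKissSketch {
	uint x = 123456789, y = 987654321, z = 43219876, c = 6543217;

	public uint NextUInt ()
	{
		unchecked { // wrap-around on overflow is intended
			x = 314527869u * x + 1234567u;           // linear congruential step
			y ^= y << 5; y ^= y >> 7; y ^= y << 22;  // xorshift step
			ulong t = 4294584393UL * z + c;          // multiply-with-carry step
			c = (uint) (t >> 32);
			z = (uint) t;
			return x + y + z;
		}
	}
}
```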

The difference is more visible on ARM devices, where floating-point computations are not very fast. E.g. the same test built/executed with Xamarin.iOS 7.0, AOT, ARMv7+LLVM (iPod 4th gen running iOS 7) took:

87 910 ms with the old code

28 658 ms with the new code

Period

Besides not being very efficient, the old algorithm had a short period (2^55-1) before it started repeating the same pseudo-random data stream. The new algorithm has a much longer period (2^127).

Note that you do not get 32 bits of randomness from a single call: the API returns a non-negative System.Int32 value, so the result of Next() lies in [0, Int32.MaxValue). The only way to get negative integers is to use the Next(int,int) overload.
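A small illustration of the two overloads (the seed value is arbitrary):

```csharp
using System;

class RangeDemo {
	static void Main ()
	{
		var rnd = new Random (1234);

		// Next () is always non-negative: the result lies in [0, Int32.MaxValue)
		int n = rnd.Next ();

		// negative values require the Next (min, max) overload;
		// the result lies in [min, max)
		int m = rnd.Next (Int32.MinValue, Int32.MaxValue);

		Console.WriteLine ("{0} {1}", n, m);
	}
}
```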

Backward Compatibility

If you depend on predictable random numbers, i.e. always the same random stream from the same seed (e.g. common in simulation software), then the change of algorithm will affect you.

In such cases the best advice is to bundle your own random implementation inside your application (and, of course, have your own unit tests) to ensure no options/bugs interfere with your results. That way you’re shielded from platform and/or language changes.

I love my retina iPad (and retina iPod Touch, but no retina MacBook yet) and I hate seeing applications that do not support them – it’s wasting pixels. However I’m not exactly an artist – as seen in yesterday’s screenshots.

In that particular case I think the “tv” text makes a great icon – even if it may be a bit too localized (is AppleTV translated to something else anywhere?). Anyway it’s clear to me that using a bit of code, like below, to generate images is a good way to support any number of screen resolutions / densities for mobile applications.
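As an illustration (not the exact code from the app), a sketch using the classic MonoTouch API – passing a scale of 0 lets UIKit pick the screen’s native density, so one code path covers every resolution; the sizes and font are arbitrary:

```csharp
using System.Drawing;
using MonoTouch.Foundation;
using MonoTouch.UIKit;

static class IconFactory {

	// render the "tv" text into an UIImage at the screen's native scale
	public static UIImage MakeTvIcon (float size)
	{
		UIGraphics.BeginImageContextWithOptions (new SizeF (size, size), false, 0f);
		using (var s = new NSString ("tv")) {
			s.DrawString (new RectangleF (0f, 0f, size, size),
				UIFont.BoldSystemFontOfSize (size * 0.6f),
				UILineBreakMode.Clip, UITextAlignment.Center);
		}
		UIImage image = UIGraphics.GetImageFromCurrentImageContext ();
		UIGraphics.EndImageContext ();
		return image;
	}
}
```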

This technique was refined by the folks at PixelCut, creators of PaintCodeApp, which lets you create CoreGraphics-API compatible source code (either in Objective-C or C#) from images you draw (instead of type).

This application is fantastic, unless you realize you’re lacking a minimum of artistic talent :( Thankfully other people have this talent, like the folks behind FontAwesome, which offers a lot of nice, useful icons in several formats.

One such format is SVG, specifically SVG paths, which is something we had to implement in Moonlight (that seems a lifetime ago). Translating this C++ code into a C# library was pretty easy. In fact it was a lot easier than writing the original code – parts of the spec are not trivial, e.g. you can find quite a few SVG (path) libraries or converters that simply ignore arcs (and even other constructs).

Next, using this new library, I created a small (less than 100 lines) tool to convert all symbols into a (rather big, around 15k lines) C# source file containing all SVG paths, each in its own method. That generated file was then used in a MonoTouch sample app (again less than 100 lines) to show them in a table view (using MonoTouch.Dialog).
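To give an idea of the generated file’s shape, a hypothetical method – the coordinates below are made up, not a real FontAwesome glyph:

```csharp
using MonoTouch.CoreGraphics;

// each symbol's SVG path commands are replayed as CoreGraphics
// calls on the supplied context
static class GeneratedPaths {

	public static void DrawTriangle (CGContext c)
	{
		c.MoveTo (0f, 0f);           // SVG "M 0,0"
		c.AddLineToPoint (10f, 0f);  // SVG "L 10,0"
		c.AddLineToPoint (5f, 8f);   // SVG "L 5,8"
		c.ClosePath ();              // SVG "Z"
		c.FillPath ();
	}
}
```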

Now this is not really new, Touch.Unit has been using this for a while. New or not, consider this as my xmas gift to other art-challenged developers in need of nice, scalable icons for their mobile apps ;-)

Also if you’re not using C# or CoreGraphics then you might want to contribute other backends. The library (and command-line tool) are made to be extensible – but I do not have immediate needs (or plans) for other languages / toolkits at the moment.

This post should not be a big surprise. As you might have guessed I had other ideas with my AppleTV.

I think there’s a large, untapped potential in the AppleTV to be used with/from other devices. Collaboration, visualization and of course gaming (think of the Wii U) come to mind. Sadly several pieces are missing for this to become reality :( Let’s try to add one of them…

If you have played with the iOS API for AirPlay you already know it does not really support pictures, i.e. what the (Apple supplied) Photo.app does is not available to developers. In particular:

The device selector (available thru MPVolumeView) shows every AirPlay device, including AirPort Express units for speakers. Photo.app somehow filters them out since they can’t show pictures;

It does not allow you to provide what’s to be shown on the AppleTV; that must be done with the MPVolumeView, which is not picture friendly;

The closest you can get is screen mirroring (only on recent iOS devices) and that’s not quite the same, since your (iOS device) screen is not available for other uses.

Note: I know private APIs exist to do this but, like jailbreaking, I do not want my app to depend on them.

As I showed before, using .NET to show pictures on an AppleTV is quite simple. What’s needed is a UI for this…

And here comes the Poupou.AirPlay assembly, which provides browsing of AirPlay devices and sending pictures to them. It works on any iPad and iPhone/iPod Touch (at least anything running iOS 5; I did not test earlier versions – feedback appreciated).

To select an AirPlay device the code uses UIActivity, available in iOS 6 (and later).
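A sketch of how such an activity could be offered to the user – MyAirPlayActivity is a hypothetical name, not the actual type from the assembly:

```csharp
using MonoTouch.Foundation;
using MonoTouch.UIKit;

// hypothetical activity: real code would browse devices and send the image
class MyAirPlayActivity : UIActivity {
	public override string Title {
		get { return "AirPlay"; }
	}
	public override bool CanPerform (NSObject [] activityItems)
	{
		return true; // real code would check for UIImage items
	}
}

class GalleryViewController : UIViewController {

	void ShowActivities (UIImage image)
	{
		// offer the image to the standard activities plus our custom one
		var activity = new UIActivityViewController (
			new NSObject [] { image },
			new UIActivity [] { new MyAirPlayActivity () });
		PresentViewController (activity, true, null);
	}
}
```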

We (as a family) have been looking for a large digital frame. My wife takes a lot of pictures (mostly kids and vacations) and showing them was always… less than optimal.

We have a few small (7-8 inch) digital frames in the house – but we do not update them very often. Copying files around on SD cards is just not fun enough;

The frames have a lot of options but we can never get the pictures to be shown long enough for our taste. That would be closer to 10 minutes and definitely not a 30-second slideshow;

The most common alternatives are to show pictures on the TV (we still have to get the pictures there) or sit in front of the computer (no file copying, but not comfortable for many people). We don’t do that unless we have friends or family visiting us;

So we wanted something larger with the right options – the set-and-forget kind.

It turns out there are not many large digital frames out there. Most of them are using computer monitors with additional hardware, sometimes a (jailbroken) AppleTV. The biggest addition, to the common hardware, is the price markup on them.

Now, on a totally (until recently) unrelated subject, I’ve been a fan of the AppleTV for a while. However the sad truth is I don’t use my AppleTV. Why? I can’t (yet?) create apps for it, and bandwidth caps, which are almost universal in Canada, make Netflix less attractive (and I have a dozen Netflix compatible devices anyway).

Does a large (e.g. 23 inch) monitor and an AppleTV make everything fine? Not quite.

We do not want to move pictures (to the cloud or a device). If it takes extra steps then it won’t be done regularly enough to be useful. The files are already on the computer/network drives;

We want the picture to be shown for more than a few seconds. The only right predetermined time is the one we decide.

The AppleTV itself does not do that and, for various reasons, I did not want to jailbreak it. Thankfully the AirPlay protocol was reverse engineered, so it is possible to send pictures from a computer to an AppleTV. The device was not bought with this in mind, nor did I look into the AirPlay protocol only for this, but it turned out to be really easy (less than 100 lines of C#) to make it work just like we wanted, which is:

Scan a directory (and subdirectories) for pictures;

Show pictures randomly – but never twice before starting again (by re-scanning the directories);

Send pictures to the AppleTV and wait for X seconds before showing the next one, i.e. the program, not the device, controls the delay between pictures.
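The three steps above can be sketched as a simple loop, assuming the PUT /photo endpoint (port 7000) documented by the reverse-engineered AirPlay protocol; extension filtering and error handling are simplified here:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Net;
using System.Threading;

static class Slideshow {

	public static void Run (string root, string appleTvHost, TimeSpan delay)
	{
		var rnd = new Random ();
		while (true) {
			// scan (and re-scan) the directories; random order, but
			// every file is shown exactly once per pass
			var files = Directory.EnumerateFiles (root, "*.jpg", SearchOption.AllDirectories).
				OrderBy (f => rnd.Next ()).ToList ();
			foreach (var file in files) {
				var request = (HttpWebRequest) WebRequest.Create (
					String.Format ("http://{0}:7000/photo", appleTvHost));
				request.Method = "PUT";
				byte [] photo = File.ReadAllBytes (file);
				request.ContentLength = photo.Length;
				using (var stream = request.GetRequestStream ())
					stream.Write (photo, 0, photo.Length);
				request.GetResponse ().Close ();
				// the program, not the device, controls the timing
				Thread.Sleep (delay);
			}
		}
	}
}
```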

I know people will rejoice at having Assert.AreEqual(x,y) back, as it is simpler than the Assert.That (x, Is.EqualTo (y)) syntax. Personally I’m happier about Assert.Throws, as I really like this one over the [ExpectedException] attribute. There are a lot of new features / attributes, many of which I never used (since the Mono-shipped version of NUnit did not have them), that should prove useful.
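A quick side-by-side of the two assertion styles, plus Assert.Throws (a minimal sketch):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class SyntaxTests {

	[Test]
	public void ClassicVersusConstraint ()
	{
		// both styles are available again
		Assert.AreEqual (4, 2 + 2);
		Assert.That (2 + 2, Is.EqualTo (4));
	}

	[Test]
	public void Throws ()
	{
		// no [ExpectedException] on the method: the faulty call is pinpointed
		Assert.Throws<ArgumentNullException> (() => Int32.Parse (null));
	}
}
```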

At this stage the only missing piece, IMHO (but shared with others too), is the lack of async testing – which I grew fond of during Moonlight’s development.

Anyway this post is to announce that Touch.Unit was updated to use this new 0.7 version of NUnitLite. Existing (old) features should be working fine (the MonoTouch bots seem quite happy with it) but I have not yet used (tested?) most of the new features. If you find anything wrong please file a bug report!

Availability: The next version of MonoTouch 5.3.x will provide the updated Touch.Unit runner but if you can’t wait (or update) then pull the update from github and “test” it away :-)