Blog

I’ve spent the last 6 years working as a freelance iOS Developer, building apps for iPhones and iPads. As verbose a language as Objective-C is, I’ve grown to enjoy working with it. Last year, Apple launched Swift – a new, modern programming language offering Apple App Developers an alternative means of building apps.

Swift is succinct, modern and flexible – great! But it’s also a new language, and for Objective-C Devs like myself that means even more new stuff to learn in addition to all the new APIs added in iOS 8.

Like others (not all others!) I’ve procrastinated and delayed using Swift in my apps. It’s still in its infancy. However, when the opportunity came up for me to present at a new Brighton-based Mobile Developer Group I decided that enough was enough and it was time to take a break from client work and focus on updating my skills.

As a large part of this learning experience I’ve been building a really cool (if I do say so myself!) iOS and Apple Watch app – an app for parents to remotely track their kids’ daily iPad/iPod usage via their own iPhone or its Apple Watch extension app. In fact, parents can even track their own usage if, like me, they find their phone hard to put down!

I’ll be blogging more about the 3 main topics covered in my presentation (Swift, Adaptive Layout and Apple Watch Dev) in the coming weeks. But for now the main purpose of this blog post is to share access to the source code and workflow videos that accompany my talk.

When developing iOS Apps it’s very rare to come up against a technical challenge that hasn’t already been solved and the solution shared on Stack Overflow or in Apple’s sample code in the iOS Dev Center. Unfortunately I haven’t experienced the same luxury with Mac OS X Development.

The problem is that beginSheet:modalForWindow:modalDelegate:didEndSelector:contextInfo: was deprecated a few years ago and appears to no longer work in Mavericks and Yosemite. With out-of-date documentation and all Googling resulting in out-of-date solutions for presenting custom modal windows, I set about figuring out the modern solution to this challenge!

The replacement for beginSheet:modalForWindow:modalDelegate:didEndSelector:contextInfo: is to call beginSheet:completionHandler: on the NSWindow that you want to present from, passing in a reference to the instance of your custom NSWindow subclass.

I also discovered a few important steps that you need to implement when presenting sheets…

Step 1: Retain a reference to the NSWindow or NSWindowController that you are creating when presenting a sheet:
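Putting those two pieces together – the strong reference from Step 1 and the beginSheet:completionHandler: call – a minimal Swift sketch might look like this (MySheetController and the property names are my own placeholders, not from any Apple sample):

```swift
import Cocoa

// Stand-in for your custom sheet's window controller.
final class MySheetController: NSWindowController {}

final class MainWindowController: NSWindowController {

    // Step 1: keep a strong reference so the sheet's window controller
    // isn't deallocated while the sheet is still on screen.
    private var sheetController: NSWindowController?

    func presentCustomSheet() {
        guard let parentWindow = window else { return }

        let controller = MySheetController(windowNibName: "MySheet")
        sheetController = controller

        // The modern replacement for the deprecated
        // beginSheet:modalForWindow:modalDelegate:didEndSelector:contextInfo:
        parentWindow.beginSheet(controller.window!) { response in
            // Release the reference once the sheet has been dismissed.
            self.sheetController = nil
        }
    }

    func dismissCustomSheet() {
        guard let parentWindow = window,
              let sheetWindow = sheetController?.window else { return }
        // Ending the sheet triggers the completion handler above.
        parentWindow.endSheet(sheetWindow, returnCode: .OK)
    }
}
```

If you skip Step 1 and let ARC release the controller, the sheet (and its window) can disappear – or crash – as soon as the presenting method returns.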

Over the Christmas break, while many were munching mince pies and stuffing in the turkey, my son Finn and I were coding games in Unity.

Finn’s 11. He’s currently in his first year of Secondary School. He’s a keen gamer on his Xbox and iPad. Tempting him to learn a little code, to understand what goes on behind the scenes of the games he plays, started as a bit of an experiment. We began by chatting about his favourite games. With the list of favourites including Stick Hero and Flappy Bird, it was clear that a game didn’t need the complexity of huge-budget hits such as Fifa, Call of Duty or Clash of Clans to make a lasting impression on an 11-year-old boy with a short attention span!

I wanted to see if I could rouse Finn’s interest for long enough to get him coding a simple game. Unity was my tool of choice. With my own experience of many tools and platforms including iOS, SpriteKit, Android and Flash, Unity is leaps and bounds ahead of the competition as the best tool for making mobile games these days. There’s also a relatively low barrier to entry: you can see immediate results from just a few lines of JavaScript code, or simply drag and drop game objects onto your canvas, apply the simple built-in physics, and watch your game objects immediately begin to interact with one another. It’s a very satisfying experience!

Not only is Unity easy to get up and running with, it’s also a very powerful tool that enables developers to publish to a long list of platforms including iOS, Android and Windows Phone. Around 70% of games in Apple’s App Store are reportedly now being developed in Unity. I created Poker Royale and Wordsy natively in Objective-C for iOS, but with hindsight Unity would have been a better choice.

Finn and I followed the course together on our laptops. It was great to see how quickly he picked up the process of navigating the Unity GUI: how easy it was to import graphic assets into his game and then position and size them on the canvas. Asbjørn moves at a fast pace and, with his quirky English accent, keeps things quite amusing. Finn and I joked together throughout the course and I noticed how his competitive instinct kicked in while working alongside dad – he was keen to show me each time he finished writing some code before me or discovered a new shortcut to get the job done quicker!

Finn has highlighted the similarities between his experience with Unity so far and learning to code with Scratch at school. I don’t know Scratch that well, but my understanding is that it provides a simple drag-and-drop UI to help kids learn the basics of how an application or game is built. It’s clearly a great foundation for taking coding to the next level.

Finn’s interest in coding seems to increase each time we spend a new Unity session together. With each session his understanding of the power of programming progresses and further opens his imagination to all of the possibilities that being a programmer offers! It’ll be interesting to see how this plays out.

The idea behind a hackathon is to work with other developers to code/hack new digital features or product prototypes in a short (and intense) period of time. Hackathons will often run overnight and developers work together to maintain their focus and momentum with the help of caffeinated and alcoholic drinks! Not all that healthy but strangely productive. After a night of coding all teams present their hacks the next morning.

I’ve never attended a hackathon before but have recently discovered just how popular they are. Internally, Facebook hosts a hackathon every 6 weeks and gives its in-house software engineers a chance to hack away on new ideas that apparently often end up becoming fully fledged features on Facebook.com or in their various mobile apps.

With Parse By The Sea Facebook decided to open up their regular hack to external Developers: Developers from all around the UK and Europe, Computer Science Students and in-house Facebook Software Engineers.

There was a fairly even mix of iOS, Android, Unity, Web and Server-Side Developers. In advance of the Hackathon Facebook set up a group for us all where we could start to share ideas and build teams. The ideas started to flow in and initial teams started to form.

An Idea and a Team

Initially I had an idea to build a multiplayer iOS word game using Parse and Pusher where four players would all compete with the same set of letters to form the longest words. All players would see each other’s game points in realtime, creating a very competitive and fun multiplayer game. I was initially quite fixed on this idea. But you’ve only got about 14 hours of code time, so an idea like this was optimistic/ambitious.

After the various sponsors had presented their APIs it was 7pm and dinner was served. I started to mingle with other developers at the Hack and met fellow iOS Developer Ben and Android whizz Jose – also a big thanks to Adam from Pusher who knocked up a simple Node.js server for us in about 5 minutes!

Ben and Jose (taken from/by our app!)

We were chatting about our various hack ideas and one of Ben’s ideas was to create a photo app where multiple users could contribute photos to shared albums. This idea really resonated with me. I’m half American and every few years I travel to the US to catch up with my extensive US family. At these family reunions there are often 50 or more family members. Everyone is taking snaps on their smartphones and capturing these special moments. But then what? How do we share these photos just with the family members? There are a number of ways to do this, with tools like Dropbox or by uploading the photos to Facebook and assigning access permissions. But both of these methods require all members of the family to sign up to the respective service, and both require a certain amount of effort by all the photographers to share.

So the idea then developed into an app that would work on both iOS and Android. An app which would connect multiple users and enable them to create and contribute to multiple shared photo albums. As users took photos those photos would update across all connected user’s devices in realtime (instantly).

And that’s just what we built in 14 hours. 2 native apps (iOS and Android) that shared data via Parse and hooked into a simple Pusher backend to instantly stream in new photos as soon as they were saved to Parse. We initially named the app Shared Photos which then became Frictionless Photo Sharing – neither were particularly inspiring names but hey it was 5am FFS!
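The realtime piece of an architecture like this is the fun part, and on iOS it can be sketched with the modern PusherSwift client – the channel and event names below are my own placeholders, not the ones we used at the hack:

```swift
import PusherSwift

// Listens on a shared album's channel and reacts as soon as another
// device finishes saving a photo to the backend data store.
final class AlbumListener {
    private let pusher: Pusher
    private var channel: PusherChannel?

    init(appKey: String) {
        // Placeholder key/cluster – supplied by your Pusher dashboard.
        pusher = Pusher(key: appKey,
                        options: PusherClientOptions(host: .cluster("eu")))
    }

    func listen(toAlbum albumID: String,
                onNewPhoto: @escaping (String) -> Void) {
        pusher.connect()
        channel = pusher.subscribe("album-\(albumID)")

        // Bind to the event a (hypothetical) Node.js server triggers
        // whenever a new photo is saved.
        channel?.bind(eventName: "new-photo") { (event: PusherEvent) in
            if let photoPayload = event.data {
                onNewPhoto(photoPayload)  // e.g. fetch and display the photo
            }
        }
    }
}
```

The server simply triggers `new-photo` on the album’s channel after each save, and every subscribed device updates instantly.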

Frictionless Photo Sharing running on iOS

I’d never met Ben and Jose before the Facebook Hack but we really connected and gelled as a team. It was amazing to work with such talented Developers and the enthusiasm for our product stayed alive throughout the night. None of us got a wink of sleep! There were beds to sleep in, and as we passed 4am, 5am, 6am the urge to sleep got greater. But there was a communal sense that we were building something super cool and we all wanted to make it as good as it could be by daybreak. So we just kept ploughing on – beer, Red Bull and jokes aflowing.

4am…

At one point Jose chucked some tenuous(!) link to the Deezer API into the Android version of the app – connecting a music track to each photo just for kicks – well anything to add a shot at more API prizes! The guys next to us were building something in Unity and we considered adding a few pics of their app into ours just to make the Unity connection… lol

Presentation Time

There were some really cool hacks and such a variety of ideas. Here are a few examples:

Two uber-geeks decided they’d write a compiler – well, why not!

The team next to ours created a 3D sound game in Unity for blind users.

One developer created a drag’n’drop music game that split Deezer tracks into multiple segments, randomised the order and players had to rearrange and rebuild the tune – awesome idea!

Flapdoodle was another memorable hack – an app for party hosts that used the Facebook and Deezer APIs to auto-build party playlists from your friends’ music tastes. An especially nice touch was the friend’s face rotating on a record during playback – so that everyone would know just who was responsible for choosing Justin Bieber!

Our Turn

Each team had just 2 minutes to present. With our hack it was all about the demo. The idea was simple, but seeing as we’d used Frictionless in the title of our app I was hoping that the presentation would follow suit!

While Ben and Jose presented the idea and UI on screen I wandered around taking snaps of the audience on my iPhone, which instantly streamed into Ben’s version of the app hooked up to the monitor. It’s not that tricky to implement this sort of thing but it makes for a really cool live demo.

Our Presentation

Initially we were a bit gutted not to pick up any of the sponsor prizes, but our disappointment was short-lived when Jim announced that the overall hack winners were Frictionless Photo Sharing! It took us a minute or two to remember that was us!!

Prize Winners!

We won Parse and Pusher Pro accounts which is super-cool as it ultimately means that we can potentially develop our prototype into a product and ship it without incurring hosting costs for the foreseeable future.

Hacking the Future

I had such an amazing first hackathon experience and working with such talented and enthusiastic teammates enhanced the experience even further. Bring on the next one!

I’m currently on a flight to San Francisco to attend Apple’s WWDC 2012 Conference – with 10 hours of time to kill I thought I’d spend a few hours writing a post about a CoreData bug I discovered recently. I’m also planning on showing the bug to Apple engineers next week so with a bit of luck this issue may get fixed in iOS 6. Fingers crossed.

Apple’s Binary Data attribute type

In iOS 5 Apple introduced a great new Core Data attribute type: Binary Data. Prior to iOS 5 it was typical to manage and store images and other binary files on disk outside of Core Data and store references to those files in your Core Data entities – not ideal.

Use Case Scenario

A lot of the apps I build make extensive use of image caching so that users can continue to use them even when they’re offline. A good example of this is my latest iPad app, Portfolio Pro. Portfolio Pro lets photographers and designers import their photos and videos into the app and then present those binary files to clients – in a coffee shop, for example. The images need to be cached by the app for offline use.

The Problem

I’ve been using Apple’s Binary Data attribute type to store both the large photographs imported by users of my app and thumbnails for those photos. I’ve updated the app a few times without experiencing any problems accessing the previously cached Core Data binary attributes. That is, until I attempted a very simple automated Core Data model migration in a recent update. A strange bug occurred: cached Core Data binary data started disappearing!

What was odd was that all of the cached thumbnails remained after an automated migration, but the larger binary data attributes containing the original large-sized photographs (2048 pixels on the long side) became nullified.

I was using exactly the same approach to store the thumbnail binary data and the large photo binary data. So why was one migrating and the other not? In both cases, in addition to setting the Core Data attribute type to Binary Data I’d also selected the Allows External Storage option. It seemed like the sensible choice, as I’d expect an optimized database framework to read/write large binary files to disk.

Allows External Storage

While investigating the migration issue I was experiencing in Portfolio Pro, I discovered that Core Data writes large binary data to disk but not smaller binary data. Exactly what size it uses as the cut-off limit I don’t know, but with large photo data I found that Core Data had created external binary files on disk in a subfolder of my app’s documents folder named “_EXTERNAL_DATA”, within another folder named “.[MyProjectName]_SUPPORT”. So if your Xcode target is named MyCoolApp then the path to Core Data’s external storage will be [DocumentsFolder]/.MyCoolApp_SUPPORT/_EXTERNAL_DATA
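Using that path formula, you can locate the hidden folder in code for inspection – a quick sketch, with “MyCoolApp” standing in for your own target name:

```swift
import Foundation

// Locate Core Data's hidden external-storage folder, assuming an
// Xcode target named "MyCoolApp" (substitute your own target name).
let documentsURL = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask)[0]
let externalDataURL = documentsURL
    .appendingPathComponent(".MyCoolApp_SUPPORT")  // note the leading dot
    .appendingPathComponent("_EXTERNAL_DATA")

// Handy for checking what Core Data has actually written to disk.
let contents = try? FileManager.default
    .contentsOfDirectory(atPath: externalDataURL.path)
```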

Goodbye External Storage

Once I’d discovered this hidden storage folder I then found the cause of my disappearing photos. When Core Data performs an automated model migration it deletes/resets the _EXTERNAL_DATA folder – goodbye photos! All of my entities were being migrated just fine but the large, externally stored binary attributes were just disappearing because the storage directory was getting nuked in the migration process.

So that’s the bug with migrating a CoreData model that uses Binary Data attributes with Allows External Storage switched on.

See for yourself

I’ve put together an example project to demonstrate the bug. Download the source files here:

Begin by opening up the Xcode project within the Before Migration folder. Run the project in the iOS Simulator. You should get similar results to figure 1.

Figure 1: Before CoreData Migration

The 2 image views are being populated by a single Core Data entity. The entity contains 2 identical Binary Data attributes: one named smallImage and one named largeImage. Both have been set up to Allow External Storage. In the view I’m rendering the smallImage binary data into the first image view and the largeImage binary data into the second image view. Both image views have their content mode set to aspect fit.
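That rendering step amounts to very little code. A sketch (using key-value coding for the attribute access, since the demo project’s generated classes aren’t reproduced here) might look like:

```swift
import UIKit
import CoreData

// Render the two Binary Data attributes of a fetched entity into
// a pair of image views, as in the demo project described above.
func populate(small smallView: UIImageView,
              large largeView: UIImageView,
              from photoEntity: NSManagedObject) {
    if let data = photoEntity.value(forKey: "smallImage") as? Data {
        smallView.image = UIImage(data: data)
    }
    if let data = photoEntity.value(forKey: "largeImage") as? Data {
        largeView.image = UIImage(data: data)
    }
    // Both image views use aspect-fit content mode.
    smallView.contentMode = .scaleAspectFit
    largeView.contentMode = .scaleAspectFit
}
```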

Close the Before Migration version of the project. Now open the After Migration (Bug) version of the same project. The only difference between this version of the project and the first is that I’ve migrated the Core Data model by adding a model version and adding a new, unused string attribute named newAttribute to the Core Data entity.

The project is already set up correctly for automatic Core Data migration. Build and run. This should replace your previous version of the example app and perform the automatic model migration. But what happens to the binary data attributes of our entity? Here’s what happens: the small image remains and the large image is deleted (see figure 2)!

Figure 2: After CoreData Migration

Ok, point proven. Delete the demo app from your iOS Simulator. Run the original Before Migration version once more – you should now have both images displayed as before. Now for the solution…

The Solution

Now build and run the third version of the example app within the After Migration (Solution) folder. You should still have 2 images displayed after migration – hurray!

The solution I’ve come up with is to run a few checks when your code initializes a persistent store coordinator for your Core Data model, before attempting automatic migration. Check whether the new model is compatible with the currently stored model. If it isn’t, then you know that Core Data is about to migrate your old model to your new version and, in doing so, will wipe the external storage folder. Before it does so, simply move the external storage folder to a temporary location. Once the migration has completed, replace the new, empty external storage folder generated by Core Data with your backup. Here’s the code that you’ll also find in the Model class of the example project within the After Migration (Solution) folder:
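In outline – and this is a sketch of the approach rather than the exact code from the sample project; the function and folder names are mine – the check-and-restore flow looks something like this:

```swift
import CoreData

// Sketch: preserve Core Data's external-storage folder across an
// automatic lightweight migration.
func addStorePreservingExternalStorage(model: NSManagedObjectModel,
                                       storeURL: URL,
                                       supportFolderName: String) throws -> NSPersistentStoreCoordinator {
    let fm = FileManager.default
    let externalDir = storeURL.deletingLastPathComponent()
        .appendingPathComponent(supportFolderName)   // e.g. ".MyCoolApp_SUPPORT"
        .appendingPathComponent("_EXTERNAL_DATA")
    let backupDir = fm.temporaryDirectory.appendingPathComponent("_EXTERNAL_DATA_BACKUP")

    // 1. Will adding this store trigger a migration?
    var migrationNeeded = false
    if fm.fileExists(atPath: storeURL.path) {
        let metadata = try NSPersistentStoreCoordinator.metadataForPersistentStore(
            ofType: NSSQLiteStoreType, at: storeURL, options: nil)
        migrationNeeded = !model.isConfiguration(withName: nil,
                                                 compatibleWithStoreMetadata: metadata)
    }

    // 2. If so, move the external-storage folder out of harm's way first.
    if migrationNeeded && fm.fileExists(atPath: externalDir.path) {
        try? fm.removeItem(at: backupDir)
        try fm.moveItem(at: externalDir, to: backupDir)
    }

    // 3. Perform the usual automatic migration.
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)
    try coordinator.addPersistentStore(
        ofType: NSSQLiteStoreType, configurationName: nil, at: storeURL,
        options: [NSMigratePersistentStoresAutomaticallyOption: true,
                  NSInferMappingModelAutomaticallyOption: true])

    // 4. Swap the freshly created (empty) folder for the backed-up one.
    if migrationNeeded && fm.fileExists(atPath: backupDir.path) {
        try? fm.removeItem(at: externalDir)
        try fm.moveItem(at: backupDir, to: externalDir)
    }
    return coordinator
}
```

The key call is `isConfiguration(withName:compatibleWithStoreMetadata:)` – it tells you, before any migration happens, whether Core Data is about to rebuild the store (and nuke _EXTERNAL_DATA along with it).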