adoptioncurve.net – an Octopress blog (feed generated 2014-12-04)

Where has the iOS8 Simulator gone?
Published 2014-12-04 – http://adoptioncurve.net/archives/2014/12/where-has-the-ios8-simulator-gone

With Xcode 6 and the new iOS8 Simulators, Apple thought it would be a good idea to move the location of the Simulator files. That’s not usually a problem, because there are only a few specific circumstances in which you need to access the files underlying the Simulator’s functions.

If you do need to access those files, the chances are that you can’t find them anymore.

Instead of looking in

~/Library/Application Support/iPhone Simulator/

you’ll now find them in

~/Library/Developer/CoreSimulator/Devices/*

That at least is easier to embed into scripts, thanks to the lack of spaces in the path.

Working with size classes in Interface Builder
Published 2014-08-16 – http://adoptioncurve.net/archives/2014/08/working-with-size-classes-in-interface-builder

Android used to be notorious amongst iOS developers for its practically infinite permutations of interface size. Viewed from the iOS world, this used to look like a problem, because iOS didn’t really provide much in the way of support for building interfaces of different sizes.

If you were building a universal app that supported both iPhone and iPad, there was a tendency to end up with a lot of if deviceType == kIpad-style code.

AutoLayout was the first part of fixing that problem, and the job’s been completed in iOS 8 with size classes. This is probably the least sexy feature introduced in iOS 8, but it’s definitely one of the most important.

Some quick background on size classes

There are currently two size classes – horizontal and vertical, and each one comes in two sizes – regular and compact. The current orientation of the device can be described as a combination of the sizes:

Horizontal regular, vertical regular: iPad in either orientation

Horizontal compact, vertical regular: iPhone portrait

Horizontal regular, vertical compact: no current device

Horizontal compact, vertical compact: iPhone landscape

Storyboards and nib files now support these size classes – so you can think of them as having up to four different layouts contained within the same file. At the bottom of the Interface Builder window, there’s now a control that allows you to switch between each combination:

Every control or AutoLayout constraint can exist in one, several or all of the size classes. This means that you can build interfaces that change depending on device type and/or orientation without any code. Controls can appear or disappear, change size, or change arrangements – all based on the layout that you create in Interface Builder.

Checking size classes

To demonstrate how the size classes change as the device rotates, you can use a new callback method that’s called as the interface changes:
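The code block itself hasn’t survived here; a minimal sketch using UIViewController’s traitCollectionDidChange: callback (Swift 1.0-era syntax, ignoring the .Unspecified case for brevity) might look like this:

```swift
override func traitCollectionDidChange(previousTraitCollection: UITraitCollection!) {
    super.traitCollectionDidChange(previousTraitCollection)

    // treat anything that isn't compact as regular, for brevity
    let width = traitCollection.horizontalSizeClass == .Compact ? "compact" : "regular"
    let height = traitCollection.verticalSizeClass == .Compact ? "compact" : "regular"
    println("Size classes are now \(width) width, \(height) height")
}
```

Rotate the iPhone Simulator and you’ll see the console output switch between the combinations listed above.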

How to build adaptive layouts

To demonstrate this, here’s a simple universal app that changes layout depending on device and orientation. You can download the full project from here, or follow along. I’m assuming that you’re using Xcode 6 Beta 5 at a minimum.

To start with, create a new universal Single View application. This will create an App Delegate, a view controller and a storyboard. I’m going to ignore the code entirely, and focus on the Storyboard alone (if you’re working with xib files, the process is exactly the same).

Before getting going, it’s worth doing a bit of tweaking to Xcode so that you can see what’s going on. If the Assistant Editor isn’t visible, open it on the right (View –> Assistant Editor –> Assistant Editors on Right) and then switch the Assistant Editor to Preview mode:

This will show a preview of the storyboard in the right hand pane – by default, it’s set up for a 4” iPhone. You can add other devices by clicking on the + icon at the bottom of the preview and selecting the device(s) that you need. They appear side-by-side in the preview pane, so if you haven’t invested in a Thunderbolt Display so far, now would be a good time to do so.

The basic layout

By default, Interface Builder creates a Storyboard with a square ‘Any, Any’ layout – in other words, anything you do with this layout will be common across all devices.

We’re going to start by centering a red block with a constant border. Drag a UIView object into the main view, set the size to 200 x 200, center it in the superview and set the background colour to red. In the Storyboard, it will look like this:

However, if you look at the preview (or run the app in the Simulator) for each device, you’ll see that things look very different:

To fix this, we need to add some AutoLayout constraints. Add constraints to pin the leading, trailing, top and bottom spaces to 50 points:

Run the app again, and this time the red block will be placed correctly regardless of which device you use (you can also change the size of the device in the Resizable Simulator, and the layout will still work.)

Altering constraints in different layouts

So far, we haven’t done anything that previous versions of iOS and Xcode could do. To demonstrate how the layout can change between devices, we’ll alter things so that in landscape, the red block fills the entire screen.

To do this we need to change the layout classes that the constraints are added to. If you select a constraint in the object tree then view the Attributes inspector, you’ll see an Installed checkbox, with a small + to the left:

By default this constraint is installed in all layout classes – what we need to do is add the relevant size class combinations so that each constraint can be attached to the right one. Click the + button, then add a compact width, regular height layout:

When you add the new layout, the constraint will be automatically added to both layouts – deselect the Installed checkbox to remove it from the default layout:

Repeat the same process for the other three constraints. As you remove the constraints from the default layout, you’ll see them disappear from the main Interface Builder pane, and become greyed out in the view tree:

To see the size class where the constraints are active, change it using the selectors at the bottom of the Interface Builder pane:

Once you’ve switched layout, you’ll see them reappear:

If you run the app again in the iPhone Simulator everything will look fine in portrait orientation. Rotate into landscape, though, and things go horribly wrong – the red block disappears, albeit with a nice animation. This is because there are no layout constraints present in this layout class combination, so AutoLayout does its best to figure out what should happen and gets it wrong. To fix this, we need to add constraints for the landscape scenario.

In Interface Builder, switch the layout class to Any width, Compact height:

Now add four new AutoLayout constraints – leading, trailing, top and bottom spaces – and set the constant value for each one to 0. Note that each constraint is added to the Any, Compact layout and not the default:

Run the app again, and as you rotate the Simulator into landscape the red block will be animated to fill the screen:

Adding and removing views in different layouts

As well as adding, removing and changing constraints between different layouts, you can do the same thing with views and controls. This could allow you to build completely different interfaces in portrait and landscape orientations, or as an alternative to separate interfaces for different classes of device.

To illustrate this, we’ll update the current interface to add a white block that appears in landscape:

Start by switching to Any width, Any height and adding a UIView to the interface. Make sure it’s a sibling view of the red block, and set its background colour to white.

Next, change to Any width, Compact height and add constraints to set a 50 point inset on all four edges.

If you run the Simulator now, you’ll see that the white block is the correct size and position in landscape, but the transition between landscape and portrait could be better:

This is because the view animates between the constraints in each size class combination, and at the moment those in portrait are undefined.

To fix this, switch to Compact width, Regular height and add constraints to centre the white block and fix its height and width to zero. You’ll also need to reduce the height and width constraint priorities to 750 to prevent a clash between size and inset constraints.

This fixes the start point of the portrait-to-landscape animation, and the end point of the landscape-to-portrait. Because the start and end points are defined, the transition of the white block is smoothly animated:

Where to go from here

By using size classes and constraints, it’s going to be possible to build up responsive interfaces in a way that wasn’t feasible before iOS8. As well as simplifying the management of device rotation, you can also create universal interfaces – something that should make targeting multiple device types a lot easier.

The other interesting extrapolation here is that we’ve now got all the tools we need to build interfaces for any size of device – whether it’s an iPhone 6 with a large screen; or apps for Apple TV; or even CarPlay. Maybe the long-awaited Apple TV SDK might be on the way…?

The many forms of Swift functions – a cheatsheet
Published 2014-08-02 – http://adoptioncurve.net/archives/2014/08/the-many-forms-of-swift-functions-a-cheatsheet

There’s a somewhat bewildering variety of forms that a Swift function can take, depending on the permutations of parameters and return values that you want to use. Here’s a cheat summary:
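The summary itself hasn’t survived in this version of the post; as a sketch, the main permutations in Swift 1.0-era syntax look like this:

```swift
// No parameters, no return value
func sayHello() {
    println("Hello")
}

// Parameters, no return value
func sayHello(name: String) {
    println("Hello, \(name)")
}

// Parameters and a return value
func add(a: Int, b: Int) -> Int {
    return a + b
}

// Multiple return values, via a tuple
func divide(dividend: Int, divisor: Int) -> (quotient: Int, remainder: Int) {
    return (dividend / divisor, dividend % divisor)
}

// A default parameter value
func greet(name: String, greeting: String = "Hello") -> String {
    return "\(greeting), \(name)"
}

// Variadic parameters
func sum(numbers: Int...) -> Int {
    var total = 0
    for number in numbers {
        total += number
    }
    return total
}

// A function that takes another function as a parameter
func apply(value: Int, transform: Int -> Int) -> Int {
    return transform(value)
}
```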

A minimum viable tableview in Swift
Published 2014-07-20 – http://adoptioncurve.net/archives/2014/07/a-minimum-viable-tableview-in-swift

This GitHub repo is a minimum viable implementation of a UITableView in Swift. Here’s a swift (badum, tish) tutorial on creating a UITableView using the new language.

This code has been updated for Swift 1.0 and Xcode 6.0.1

The project consists of a single storyboard with a table view control, and a view controller written in Swift:

Housekeeping

create a String constant as a cell identifier (Swift is clever enough to infer the fact that it’s a String when we define the constant):

let cellIdentifier = "cellIdentifier"

define an Array variable to contain the table’s data. This is an Array of Strings, and we initialize this as empty as we declare it:

var tableData = [String]()

define a UITableView outlet to connect the table in the Storyboard:

@IBOutlet var tableView: UITableView?

The outlet’s defined as an optional, because it won’t exist until the view has been instantiated (that’s a similar approach to declaring outlets as weak with Objective-C).

You’ll also need to set up the Storyboard with a UITableView object that connects with the view controller as dataSource and delegate, and connects the view controller’s tableView outlet with the table view.

Registering a UITableViewCell class with the table

So that the table can use UITableViewCell objects, we’ll need to register the UITableViewCell class for use with the cellIdentifier that we created in the stage above. The most logical place to do this is in the viewDidLoad method:

Swift’s way of returning the class name for use with the registerClass method is subtly different from Objective-C – rather than using something along the lines of [UITableViewCell class], in Swift you use UITableViewCell.self.
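Assuming the cellIdentifier constant and tableView outlet defined in the Housekeeping section, the registration looks something like this:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // register the standard UITableViewCell class against our identifier
    tableView?.registerClass(UITableViewCell.self,
        forCellReuseIdentifier: cellIdentifier)
}
```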

Creating some table data

Next, we need to create some data for the table to work with. This is going to be held in the tableData array:

for index in 0...100 {
    self.tableData.append("Item \(index)")
}

This uses Swift’s for-in loop to create 101 Strings (indices 0 through 100) and append them to the tableData array. The \(...) format is a huge improvement on the legacy NSString stringWithFormat: style…

Telling the table about sections and rows

The UITableView datasource methods haven’t changed in iOS8 – you still need to inform the table how many sections and rows it will have.

(Technically, the table view will assume that it has one section unless you tell it otherwise, but I usually implement the numberOfSectionsInTableView: method out of sheer habit):
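Reconstructed from the description, the datasource methods look something like this in Swift 1.0-era syntax:

```swift
func numberOfSectionsInTableView(tableView: UITableView) -> Int {
    return 1
}

func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return tableData.count
}

func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    // dequeue a cell, force-downcast it with as, and configure its label
    let cell = tableView.dequeueReusableCellWithIdentifier(cellIdentifier,
        forIndexPath: indexPath) as UITableViewCell
    cell.textLabel?.text = tableData[indexPath.row]
    return cell
}
```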

Once again, this is very straightforward: we dequeue a cell with the appropriate identifier, configure its contents, and return it to the table view. The as operator force-downcasts whatever the dequeue call returns into a UITableViewCell.

Cell selection

Although that’s all that’s needed to get the table view up and running, here’s how a simple UITableViewDelegate method is implemented:
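Here’s a sketch of didSelectRowAtIndexPath: presenting a UIAlertController (the title and message text are made up):

```swift
func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) {
    let alert = UIAlertController(title: "Row tapped",
        message: "You tapped \(tableData[indexPath.row])",
        preferredStyle: .Alert)

    // the handler block runs when the button is tapped - no delegate needed
    let action = UIAlertAction(title: "OK", style: .Default, handler: { action in
        tableView.deselectRowAtIndexPath(indexPath, animated: true)
    })
    alert.addAction(action)

    presentViewController(alert, animated: true, completion: nil)
}
```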

You’ll notice that the last handler parameter of the UIAlertAction takes a block: this means that you can implement whatever actions you want to occur once the button has been tapped in-place, rather than having to link the alert controller to the view controller through a delegate.

That makes for much less code, and also keeps the action together with UIAlertController (helping to make the code more readable).

If you wanted to log something to the console, say, you could implement this:
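For instance, a handler that just logs the tap, using trailing closure syntax:

```swift
let action = UIAlertAction(title: "OK", style: .Default) { action in
    println("Tapped OK for row \(indexPath.row)")
}
```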

It’s compact, certainly, but this is an example of something that does make me wonder why quite so many people rave about Swift as being more readable than Objective-C – I’m really not sure this is actually so much of an improvement.

Summary

One of the killer features of Swift is its backwards-compatibility with the existing legacy of Objective-C and the iOS frameworks. This is a good example of how the basic design patterns of the controls haven’t changed at all – moving to Swift is just a case of getting to grips with the new syntax and language features.

Creating a draggable UICollectionViewCell
Published 2014-07-16 – http://adoptioncurve.net/archives/2014/07/creating-a-draggable-uicollectionviewcell

So here’s the situation – you’re creating an interactive UICollectionView, and you want to be able to drag a cell around the screen with a touch. To provide user feedback, you want the contents of the cell to follow the user’s finger as it moves around.

The problem is that unless you’re using a completely custom collection view layout, you can’t move the cell itself. The collection view is in charge of where things are displayed, and it’s a major pain to override this – especially if you’re using a flow layout. Reimplementing UICollectionViewFlowLayout from scratch is a decidedly non-trivial undertaking.

The answer lies in a hack. Create a copy of the contents of the cell as an image, then drag this around the screen underneath your finger. Much easier.

Here’s an example – it assumes that you’ve previously created and attached a UIPanGestureRecognizer to the collection view, and tied this to a method called handlePan: in your view controller. There’s also a UIImageView property on the view controller called movingCell.

When the pan gesture recognizer fires, it calls the handlePan: method with itself as a parameter.

A UIPanGestureRecognizer has three states that we’re interested in – UIGestureRecognizerStateBegan (which fires as the first touch starts), UIGestureRecognizerStateChanged (which fires as the touch moves) and UIGestureRecognizerStateEnded (which fires as the finger is lifted).

We hook into the UIGestureRecognizerStateBegan event, and get the location where the pan gesture is occurring:
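A sketch of that first stage (the collectionView property name is assumed; the snapshot is made by rendering the cell’s layer into an image context):

```objc
- (void)handlePan:(UIPanGestureRecognizer *)sender
{
    CGPoint location = [sender locationInView:self.collectionView];

    if (sender.state == UIGestureRecognizerStateBegan) {
        // find the cell underneath the touch...
        NSIndexPath *indexPath = [self.collectionView indexPathForItemAtPoint:location];
        UICollectionViewCell *cell = [self.collectionView cellForItemAtIndexPath:indexPath];

        // ...and render its contents into a UIImage
        UIGraphicsBeginImageContextWithOptions(cell.bounds.size, NO, 0.0f);
        [cell.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *cellImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // cellImage is then used to populate the movingCell
        // image view, as described below
    }
}
```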

Finally, we use this UIImage to populate a UIImageView property, and update the center of the UIImageView so that it lies underneath the current location of the touch. I’ve also tweaked the image’s opacity to make it slightly translucent:
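Sketched out (cellImage and location being the snapshot image and touch point from the Began branch), together with the Changed and Ended states:

```objc
// at the end of the Began branch:
self.movingCell = [[UIImageView alloc] initWithImage:cellImage];
self.movingCell.alpha = 0.6f;   // slightly translucent
self.movingCell.center = location;
[self.collectionView addSubview:self.movingCell];
```

```objc
// the other two states of handlePan::
if (sender.state == UIGestureRecognizerStateChanged) {
    // keep the snapshot underneath the finger as it moves
    self.movingCell.center = [sender locationInView:self.collectionView];
} else if (sender.state == UIGestureRecognizerStateEnded) {
    // drop the pseudo-cell when the touch finishes
    [self.movingCell removeFromSuperview];
    self.movingCell = nil;
}
```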

This implementation simply removes the pseudo-cell from the screen when the touch finishes, but there’s no reason why you can’t do something like insert it back into the collection view at the point where it was ‘dropped’. I’ll put the code for this up in another post.

Mocking UICollectionViewLayouts
Published 2014-07-10 – http://adoptioncurve.net/archives/2014/07/mocking-uicollectionviewlayouts

At the heart of custom UICollectionViewLayouts are lots of calculations, and creating/debugging these by hand can be painful. It’s easier in the long run to write tests to help with this – but setting up the stack of objects to make the tests run can be a bit involved.

Here’s how I’m doing it – using XCTest and OCMock, although there’s no reason why this approach won’t work with other test/mock frameworks like Kiwi etc.

- (void)testCalculateSpokeRadiusReturnsCorrectValueForTwoItems
{
    UICollectionView *collectionView = [[UICollectionView alloc] initWithFrame:CGRectMake(0, 0, 500, 500)
                                                          collectionViewLayout:self.customLayout];
    id collectionViewMock = OCMPartialMock(collectionView);
    [[[collectionViewMock stub] andReturnValue:@(1)] numberOfItemsInSection:0];
    [collectionViewMock setCollectionViewLayout:self.customLayout];

    [self.customLayout setItemSize:CGSizeMake(100, 100)];
    [self.customLayout setSidePadding:10.0f];

    XCTAssertEqual([self.customLayout calculateSpokeRadius], 190.0f, @"should be 190.0f");
}

With this in place, you can then stub out the numberOfItemsInSection: method and return the number of items you want to run the calculations for – by mocking out this method, you’ve got no dependencies on your datasources.

The advantage of using a partial mock is that you only need to stub out the methods that you want to control – you can use everything else as you would with the real, live object.

Here, I’ve created a helper method inside the custom layout to calculate the radius from the centre of the collection view for various sizes of layout. That’s often an easier approach to take – calculating layout attributes like item centre often involves some fiddly maths, so by breaking it up into chunks of helper methods you can test each bit piece-by-piece.

This tends to be easier in the long run than doing everything in one fell swoop, because you can spend a long time down the rabbit hole of figuring out where the layout is going wrong. With this test, I can throw various sizes of collection view at the layout, and check that things will still work out OK.

Time for a change
Published 2014-07-08 – http://adoptioncurve.net/archives/2014/07/time-for-a-change

A quick update, as it’s easier to put it up here than condense down into 140 characters for a tweet. As of this week, I am no longer with Centralway – I remain under NDA, so that’s all the details I can share.

This means I’m available for hire. I do several things:

application architecture: figuring out how the user interactions, front-end, back-end and data pieces of a service need to work in order to play nicely together.

team management: building and running development teams, with all the challenges that are involved in a group of different skills and personalities.

iOS development: hands-on code-cutting of apps.

All of these I’ve done in a variety of organisation shapes, sizes and cultures – so I’ve got to be quite good at the plate-spinning and cat-herding that tend to be involved in getting a service up and running in today’s online world. Ideally, I’m looking for something that draws on all of those areas.

Location-wise, I’m interested in pretty-much anywhere in mainland Europe. Somewhere English/German-speaking would be a bonus.

Any leads will be gratefully received.

Why Swift isn’t going to change the app industry – just yet
Published 2014-06-08 – http://adoptioncurve.net/archives/2014/06/why-swift-isnt-going-to-change-the-app-industry-just-yet

One of the common reactions to Monday’s announcement of Apple’s new Swift language was that it’s lowered the bar for iOS development. We can now look forward to the dawning of a halcyon age where apps have never been easier and cheaper to create.

The other way of looking at this idea is that as an established iOS developer, your industry might be about to get invaded by millions of newbies who haven’t had to earn their stripes learning the intricacies of ObjectiveC. Your rates are about to go down, and the age of demand exceeding supply is over.

Both views are simplistic, and miss the bigger point. The bar to entry to this industry hasn’t moved at all – if anything, it’s been raised.

And that’s because becoming a competent developer who can earn a professional living writing code isn’t about mastering one area, it’s about mastering FOUR.

It doesn’t matter if you’re writing mobile apps in a language that was announced 24 hours ago; or banking systems in a language designed by people who have since died at a ripe old age. If you don’t understand these four areas, you’ll suck as a developer.

You need to know the language, the paradigms, the frameworks and the environment of the platform that you’re building for.

Knowledge of the language

In order to build any kind of software, you need a working knowledge of the language that you’re using. To learn a language can be the work of a few hours; to master it can be the work of a lifetime.

Irrespective of where you lie along the continuum, there’s a certain minimum amount of expertise that you will need to get by. You need a working knowledge of grammar in order to make yourself understood in speech, and coding is no different – understanding the syntax of a language is a prerequisite to being able to work with it.

Knowledge of the paradigms

When you think hard about what programming actually is, it gets philosophical in a way that you might not expect. You’re in the business of taking tangible, real-world problems and reconstructing them in an intangible mental domain.

We talk about objects as if they’re concrete physical entities – but building an app is actually a process of making a whole series of imaginary constructs, and then getting them to interact with each other.

To do that, you need a working knowledge of the underlying paradigms. This is taking the physical form of the language and making use of it. In order to use objects to break down a problem, you need to understand what they are.

Knowing only the details of a specific language isn’t going to be much use here without the ability to comprehend the patterns of mental abstractions that you are expressing using the code of the language that you’ve chosen.

Knowledge of the frameworks

Knowing what an object is, and how you create one with Swift or Objective-C or whatever language you’re working with, still isn’t enough. Next, you need a knowledge of the frameworks.

While it might be possible to build every aspect of an iOS app from scratch, that’s not a practical way to work. So instead we rely on the frameworks that Apple provides. Table views are a standard control – but they behave in a particular way, and you need an understanding of how they fit together and operate in order to exploit them.

There are so many frameworks involved in iOS that it’s probably not a practical proposition to even attempt to understand all of them.

But some are unavoidable – you’re not going to get far without knowing even the basics of UIKit, for example.

To understand this takes not just a knowledge of the language that they’re built in, but also the underlying paradigms that they’re exploiting. Knowing how to set a table view’s delegate is one thing, but you also need to know what a delegate actually is.

Knowledge of the environment

Assuming a working ability with the language, the paradigms and the frameworks, you’re now in a position to start building things. But only knowing those three is missing one vital part – and that’s the understanding of the environment in which you’re operating.

The shorthand way of referring to this is user experience – why your interface is laid out the way it is, and the workflows that your app provides in order to solve the particular problem it is dealing with.

But this is a complex mix of physical constraints and psychology. Force your users to rely on voice input in a noisy environment – or large amounts of text input on a tiny keyboard – and they’ll struggle to interact with your app. Force them to think about how they need to interact, and they’re likely to give up and find an easier alternative.

Where does this leave Swift?

Swift doesn’t change the fundamentals of what it takes to be a competent developer AT ALL. It’s possible that it might make picking up the basics slightly easier, although I’m sceptical. Go beyond the first few pages of the Swift guide, and this doesn’t feel like a toy language designed specifically for ease of learning.

What Swift doesn’t do is lower the bar to understanding the paradigms, the frameworks, or the environment. In fact I’d argue it actually raises the bar, at least for a while. It will probably be several years before you can fully get to grips with iOS without any knowledge of ObjectiveC, so for the interim you’re going to need to learn the details of two languages if you’re just starting out.

It might make building some chunks of functionality quicker, and you might find the syntax more expressive or the structure clearer to read.

But without knowing the abstract concepts of programming, and the details of Cocoa Touch, and the constraints of tiny screens in a world of continuous partial attention, you’re not going to succeed in this industry. So I’m not going to worry about being priced out of my chosen field just yet.

The cowardly test-phobe’s guide to iOS testing: networks
Published 2014-05-29 – http://adoptioncurve.net/archives/2014/05/the-cowardly-test-phobes-guide-to-ios-testing-networks

(This is the second part of a text-based version of the talk I gave for iOS Con at Skillsmatter in London on Friday 16th May. If you prefer the full multimedia experience, there’s video available behind a login wall at https://skillsmatter.com/skillscasts/5167-tdd-in-ios.)

Testing networks

Unless you’re in the business of writing fairly trivial apps, eventually your code is going to need to talk to some external services reachable across a network link. That immediately opens up a whole world of problems that you need to deal with. Availability, latency, and quality of service are the issues that your app is going to have to handle, while you’re also going to need to make decisions about how to inform your user of what is going on.

It’s very easy to fall into the trap of building apps that work beautifully in the Simulator when sat on a Gigabit ethernet segment downstream from a multi-megabit fibre broadband connection. But the real world isn’t like that – your app needs to be able to handle the flakiest of ropey Edge services, not just full-fat wifi. Forgetting to handle those edge cases is a quick way to build something with really sucky user experiences.

It (should) go without saying that this needs to be tested. But that can be tricky – very often the APIs that your app is talking to aren’t under your control. They’re designed to be reliable and return valid data – so how can you test for the edge cases?

There are a couple of solutions, both of which rely on creating “stunt double” APIs that can stand in for the real thing. By tweaking your mock API, you can develop and test both happy paths and edge cases without any dependencies on live services.

Mocking APIs with servers

If you’re dealing with a relatively trivial API, the simplest option may be to whip up a standalone server and point your app at that for testing purposes. If you know Sinatra or Node, creating a mock API that accepts calls from your app and returns the contents of some predefined data files stored locally to the server isn’t that difficult.

But that a) presupposes that you have those kinds of technologies at your disposal, and b) creates another set of dependencies. In order to run the tests, you’ll need to make sure that your server is up and running, and returning the right values for a given endpoint. What would be far more elegant is a situation where all the moving parts needed for testing could somehow be bundled into your Xcode project.

Then you also need to make sure that your tests are calling the test API, while your live app talks to the production version. You don’t need too much imagination to see what could potentially go wrong here…

Mocking APIs

A practical alternative to a standalone server is a network stubbing library, which sits inside your test target and intercepts any calls to the network in order to return data that you define. One of the most widely used is OHHTTPStubs.

OHHTTPStubs works by catching calls from NSURLConnection and NSURLSession, and checking whether the request should either be passed through as normal or intercepted by the library. If the call is to be intercepted, the library handles creating and returning the data – the call doesn’t get out to the actual network.

There’s also the ability to manipulate the way the response is sent back – for example, setting a simulated delay or latency, returning custom HTTP headers or response codes, or just behaving as if all the packets were dropped. By changing responses, it becomes possible to test a variety of situations ranging from perfect network connectivity to complete isolation.

Setting up OHHTTPStubs

OHHTTPStubs is easiest to install using Cocoapods. Add pod 'OHHTTPStubs' to your Podfile, run pod install, and you’re good to go.

However, now is a good point to introduce a caveat. The library makes extensive use of private frameworks to swizzle the functionality into place, so including it in an app that you try to ship to the App Store is a Very Bad Idea Indeed. There are a couple of ways around this: one is to remember to take it out (not recommended); the other is to only include the library in your test target.

Assuming your project is called MyFantasticApp, then your Podfile should look like this:
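A minimal sketch (the target names come from the example above; only the test target links the pod):

```ruby
# the app's own pods go in the main target as usual
target 'MyFantasticApp' do
end

# OHHTTPStubs is linked into the test target only,
# so it can never leak into an App Store build
target 'MyFantasticAppTests' do
  pod 'OHHTTPStubs'
end
```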

As the MyFantasticAppTests target is separate from the main one, OHHTTPStubs won’t get compiled in when you build the project.

How it works

The basic syntax of using OHHTTPStubs looks like this:

[OHHTTPStubs stubRequestsPassingTest:^BOOL(NSURLRequest *request) {
    // test the request to see if we want to stub it,
    // and return YES if we do
} withStubResponse:^OHHTTPStubsResponse *(NSURLRequest *request) {
    // create and return an OHHTTPStubsResponse object
    // with the data that we want to return
}];

Which request to stub?

In the first block, we’re examining the NSURLRequest to see if it’s one we want to stub out – this allows you to pass through some requests, but catch others.

If you want to stub ALL requests, you simply return YES from this block. Otherwise, you can be more subtle:
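For example, stubbing only the calls aimed at your API’s host (the host name here is made up):

```objc
[OHHTTPStubs stubRequestsPassingTest:^BOOL(NSURLRequest *request) {
    // only intercept requests to our (hypothetical) API host
    return [request.URL.host isEqualToString:@"api.example.com"];
} withStubResponse:^OHHTTPStubsResponse *(NSURLRequest *request) {
    return [OHHTTPStubsResponse responseWithData:[NSData data]
                                      statusCode:200
                                         headers:nil];
}];
```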

How to return data

Once the stubRequestsPassingTest: block has returned YES, you’ll need to create an OHHTTPStubsResponse object to return to the method that made the original request. This mimics the data payload that the API would return.

There are several ways of doing this:

responseWithData:statusCode:headers: allows you to create an NSData object yourself, and return it along with an HTTP status code and HTTP headers

responseWithFileAtPath:statusCode:headers: allows the contents of a file to be returned, along with status codes and headers. This file can be JSON, HTML, binary data or whatever format your API will return – the only requirement is that it exists in the app bundle where the tests can find it.
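Putting that together with a JSON file stored in the test bundle (the file name is a placeholder):

```objc
[OHHTTPStubs stubRequestsPassingTest:^BOOL(NSURLRequest *request) {
    return YES;  // stub everything
} withStubResponse:^OHHTTPStubsResponse *(NSURLRequest *request) {
    // the file lives in the test bundle, so look it up via the test class
    NSString *path = [[NSBundle bundleForClass:[self class]]
                      pathForResource:@"sampleResponse" ofType:@"json"];
    return [OHHTTPStubsResponse responseWithFileAtPath:path
                                            statusCode:200
                                               headers:@{@"Content-Type": @"application/json"}];
}];
```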

Getting sample data

The first thing you’ll need when starting to use OHHTTPStubs is some data to return for your tests.

Loading a Storyboard programmatically
Published 2014-05-25 – http://adoptioncurve.net/archives/2014/05/loading-a-storyboard-programatically

I am not the world’s greatest fan of Storyboards, having been scarred by an unfortunate project involving a) a huge Storyboard b) multiple developers and c) horrific merge conflicts. That said, some people do like them, so each to their own…

If you manage to overcome the visceral loathing and need to use Storyboards in tests, you’ll hit the problem that they seem to be used semi-magically. There’s no obvious equivalent of an initWithNibName:-style method that you can hook into to load the thing as you kick your test case off.

The answer is actually reasonably straight forward. Create your storyboard as normal (or rely on an Xcode template if that’s your thing), then make sure that you’ve given it a Storyboard ID in the Attributes inspector. The ID you use is a text string, and Xcode doesn’t seem to care what that string is.

Then in your test, you load and instantiate your view controller in a two stage process – here’s an example Kiwi test:
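A sketch of that two-stage load in a Kiwi test (the storyboard name, view controller class and identifier are placeholders for your own):

```objc
// Sketch: load the storyboard, then instantiate the view controller
// from it using its Storyboard ID.
describe(@"The view controller", ^{
    it(@"can be instantiated from the storyboard", ^{
        UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"Main"
                                                             bundle:nil];
        MyViewController *controller =
            [storyboard instantiateViewControllerWithIdentifier:@"MyViewControllerID"];
        [[controller shouldNot] beNil];
    });
});
```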

This loads the storyboard itself, then instantiates the view controller from it – the identifier string is whatever you put in the Storyboard ID field in Interface Builder.

]]>2014-05-25T17:48:00+02:00http://adoptioncurve.net/archives/2014/05/the-cowardly-test-o-phobes-guide-to-ios-testing(This is the first part of a text-based version of the talk I gave for iOS Con at Skillsmatter in London on Friday 16th May. If you prefer the full multimedia experience, there’s video available behind a login wall at https://skillsmatter.com/skillscasts/5167-tdd-in-ios.)

Why is testing scary?

Unit testing is a topic that gets talked about a lot, but if you’re not a computer scientist it can have a tricksy reputation. It doesn’t help that much of the source material available is Java-based. That’s fine if you like Java – I’m personally not so keen – but there’s a lot less help if your primary weapon of choice is Objective-C.

Testing is also a topic that attracts – how shall we put it? – some of the more tedious personalities in this business. There’s nothing so dull as a self-appointed “thought leader”, and a lot of what passes for debate on testing is so much arcane, “how many angels can dance on the head of a pin”-style nonsense.

That’s got very little to do with the day-to-day grind of shipping software. There is no One Way to do this, and if you look hard enough at the motives of those who would have you believe that, you often find it comes down to selling themselves as a brand.

Why is testing important?

All that said, testing is important. Let’s start from the premise that the fewer bugs in your code, the better. If you subscribe to that school of thought, then anything that helps you achieve this has to be a good thing. As the developer who writes 100% bug-free code hasn’t been born yet, we’re also faced with the challenge that the more code we write, the more likely it is that the project will turn out to be infested with bugs.

Documentation is also held to be a Good Thing, but it’s also something that very few devs are particularly keen on doing – especially once the code’s been written. A comprehensive test suite, especially if it’s been built using one of the more “descriptive” tools such as Kiwi or Specta, can almost replace documentation. It’s also considerably easier to read than code, because it’s documenting intent rather than execution – the “what” rather than the “how”.

Perhaps the most important reason for taking a test-driven approach, though, is the way you’ll knit yourself a “security blanket” around your code. We’ve all been in the situation where making changes to an existing code base is a stressful affair, because you’re never quite sure whether changing something over here will break something over there.

If your test suite is comprehensive enough, you can relax to an extent knowing that the tests will catch those kinds of problem. And that becomes particularly important if you’re working with others, because tests can help catch things that break your code.

The basics

The basic purpose of testing is to ask the question “does my code do what it’s supposed to do?” Assuming that you can give a positive answer to that, it will then help you to ask other, more probing questions. “Can my code handle unexpected values?” is one of them. Coding for the “happy path” only is a common problem – how can you ensure that your app will still work if the API doesn’t respond, for example? What will happen if the data received back from the API is beyond the bounds you expected, or is corrupt?

Even if your code is capable of handling strange data, you can still end up creating problems for yourself further down the line. As you add more classes and features, the chances of something new breaking something existing multiplies. Protecting each area with tests means that you have a safety net that should catch problems caused by code in another part of the app.

Why not test last?

The meaning of the word “test” implies checking after the fact – making sure that the code you’ve written functions as you expected it to do. The problem with testing code after you’ve written it is that there’s actually very little motivation to do this in normal circumstances. After all, you’ve written the code, it does what it’s supposed to, and you’re infallible – right? Testing that it works is just spending more time doing over the same ground.

That’s definitely the response you’ll get when the project manager wanders over and asks what you’re doing. If the answer is “writing tests for the feature that we shipped last week”, then they’re going to ask when you’re going to get on with more productive stuff. In a time-constrained project, there’s a perverse incentive not to test.

Why test first?

The rhythm of test-first, or test-driven development follows a predictable pattern. First, you write a test which describes the outcome that you’re after. What that will be obviously depends on the type of code you’re writing – in the case of a calculation, it would be a correct result; in the case of a UI feature it would be some kind of update to the views.

Then, you run the test. This seems counter-intuitive, because it fails. It has to fail, after all – you haven’t written any code to make it pass yet. In the jargon, your test has just “gone red”. But in the process of seeing the test fail, you’re already getting clues about how to go about fixing it. It’s like having a small and benign homunculus perched on your shoulder, whispering hints about what to do as the tests progress.

With that advice in mind, you can then write the code that will make the test pass – or “go green”. If at first the test fails, you know you haven’t got it right yet – but you always have the target in mind, because you started by describing the outcome that you were looking for.

Once you’ve got a passing test, you’re then free to improve things safe in the knowledge that the test will catch anything you do to break the code. That’s the “refactor” step, and you’ve now been once round the red-green-refactor loop that is the basic process of test-driven development.

Challenges of testing iOS

Test-driven development, or indeed testing of any kind, has some special challenges in the iOS world. The tools and techniques of testing were developed in a terminal-driven world, so many of the approaches are predicated on the results being something that has no user interface component.

On iOS, on the other hand, practically everything happens in response to some kind of user input passed down through the Cocoa Touch layer. And the code that responds to the touches can be highly modular – an individual class rarely stands and operates on its own, but instead has to collaborate with all manner of other classes and external APIs.

These two factors – responding to touches, and highly-modular code architectures – can make test-driven development on iOS seem like an impossible challenge. That’s even more the case if you’ve inherited an existing project, and you’re now faced with the challenge of wrapping tests around an already-built code base.

Fortunately, there are some available tools and techniques which can get around both these problems. In the next post, I’ll dig down into how these work in practice, and how you can take advantage of the structure of Cocoa Touch itself to build apps in a completely test-driven way.

]]>2013-11-18T14:51:00+01:00http://adoptioncurve.net/archives/2013/11/creating-image-callouts-in-omnigraffleHere’s a quick way of drawing an enlarged callout on an image using OmniGraffle.

To avoid pixellation in the callout, your original image will need to be bigger than the main image on the canvas. Place the main image on the canvas and scale it accordingly:

Draw a circle shape onto the canvas, then add another copy of the main image as the shape’s background using the Set Image option in the Image section in the inspector:

A scaled-down version of the main image will appear as the background of the callout shape:

Now click the Manual Sizing button in the Image inspector to revert the shape background to full size:

Then click the Mask button so that the shape is shown as an overlay on the full-size image:

Now you can use the scaling grab handles at each corner of the shape’s background image to get the callout to the correct size, and move it so that the correct area is shown in the callout:

Clicking the Done button in the Image inspector will hide the full background image – at this point you can move the callout into the right place.

To show the zoomed area, drag another Circle shape onto the canvas and set its fill style to none. Now position it to show the area that’s enlarged in the callout. You can also adjust the border of the circles to make them stand out:

Finally, add two tangent lines to join the original area and the callout:

The end result is a neat callout image highlighting a specific area on the main image.

]]>2013-11-18T09:48:00+01:00http://adoptioncurve.net/archives/2013/11/universal-principlesThe UK Cabinet Office is using GDS to run a technology transformation programme, according to their blog. They’ve published the guiding principles that they’re using – and they’re sufficiently flexible to apply pretty much anywhere. Replace specific mentions of government with the sector of your concern, and they’re equally applicable:

Our guiding principles over the next 12-18 months include the following:

* we will start with user needs: until we understand what users across the Cabinet Office want and need, we won’t start buying things

* we will design with choice and flexibility in mind: there will be many and different needs across the department so we will offer technology solutions that fit individuals and teams

* we will be transparent throughout: we will be open about decisions and actions so our users and stakeholders understand why we’re taking a certain approach

* we will architect loosely coupled services: we are not building a “system”; we are delivering a set of devices and services that can be independently replaced. A key success measure for the programme is that we should never have to do it again

* we will favour short contracts: technology changes rapidly and we believe the age of the long-term contract is over. We need to be able to swap services in and out as the need arises

* we will bring the best of consumer technology to the enterprise: modern devices and cloud applications are built to be intuitive and flexible with minimal need for training. We believe business technology should be the same

* we will make security as invisible as possible: we are working with CESG and GSS to ensure all services are secure to new Official level. However, appropriate levels of security shouldn’t get in the way of the user experience of the services

* we will build a long-term capability: technology delivery doesn’t end with the programme. We will not be handing the services over to a single outsource vendor in 2015, but instead will be bringing digital skills back into the department

]]>2013-11-09T16:18:00+01:00http://adoptioncurve.net/archives/2013/11/introducing-actsasbeaconiOS7 introduces support for Bluetooth LE (aka Bluetooth Smart). “LE” stands for Low Energy – a Bluetooth LE device has an incredibly low current draw, which means it can potentially operate for extended periods (think months) on nothing more than a coin cell battery.

Apple are using Bluetooth LE to power iBeacons – a beacon is a low-power device that talks Bluetooth LE and can be detected by the CoreLocation stack. An app can be “woken up” by a beacon, and can use the signals from several beacons to obtain location information. Think indoor GPS with a (potential) accuracy of centimetres.

At the moment, the iBeacon spec is under NDA, which means that beacons themselves are hard to come by. You can pre-order Estimotes which look like they’ll be the simple, pretty but expensive option; or try Kontakt devices (not so pretty, still expensive). Or there’s the Redbear Labs BLE Mini if you prefer bare boards and some soldering.

iOS devices of recent vintage can act as beacons, though – so if you just need some beacons for testing, there’s no reason why you can’t grab a handful of iPod Touches or similar and use these. The other advantage of this approach is that configuring the various beacon parameters is much easier with an iOS device than fiddling around with hardware alternatives.
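As a sketch of how a device advertises itself as a beacon (the UUID, major/minor values and identifier are placeholders; this would live in a class acting as the CBPeripheralManager’s delegate):

```objc
#import <CoreBluetooth/CoreBluetooth.h>
#import <CoreLocation/CoreLocation.h>

// Build a beacon region describing the values to broadcast
NSUUID *uuid = [[NSUUID alloc]
                initWithUUIDString:@"E2C56DB5-DFFB-48D2-B060-D0F5A71096E0"];
CLBeaconRegion *region = [[CLBeaconRegion alloc] initWithProximityUUID:uuid
                                                                 major:1
                                                                 minor:1
                                                            identifier:@"net.example.beacon"];
NSDictionary *advertisingData = [region peripheralDataWithMeasuredPower:nil];

// Stand up a peripheral manager...
self.peripheralManager = [[CBPeripheralManager alloc] initWithDelegate:self
                                                                 queue:nil];
// ...then, in peripheralManagerDidUpdateState:, once the state reaches
// CBPeripheralManagerStatePoweredOn:
[self.peripheralManager startAdvertising:advertisingData];
```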

ActsAsBeacon is a tiny app which turns your iOS7 device into an iBeacon, and has a search function to show details of beacons in the vicinity. It will also allow you to configure the service UUID and broadcast parameters so that it’s possible to experiment with iBeacon-enabled apps.

In the next version that I’m currently tinkering with, the app will also provide a configuration interface for BLE Mini boards – Redbear Labs have an app for this, but it’s a bit broken on iOS7. My version allows the Mini boards to be configured over the air once their firmware has been updated to run the iBeacon version.

The app’s available as a GitHub repo, and I’ll be submitting a version to the App Store in a couple of days so that there’s no dependency on Xcode and a Developer Program license.

]]>2013-10-06T13:46:00+02:00http://adoptioncurve.net/archives/2013/10/removing-storyboards-from-xcode-5-s-default-single-view-app-templateThe new default single-view application template in Xcode 5 is based on Storyboards rather than the previous view-controller-and-nib-files approach. That’s fine if you like Storyboards, but I don’t – so the first thing I do when starting a new project is rip them out and replace them with the old approach.

This is by way of an outboard brain dump to remind myself of how this is done.

Remove the Main.storyboard file

This can simply be deleted.

Update the ProjectName-Info.plist file

Remove the Main storyboard base file name key.

Create a nib file and link to the project’s view controller

Create a nib file (File –> New –> File –> View)

Update the File's Owner's class to whatever the project’s view controller is called

Link the File's Owner's view outlet to the view object in the nib file

Update the app delegate

Import the project’s view controller’s header file

Update the application:didFinishLaunchingWithOptions: method:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Override point for customization after application launch.
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];

    MyViewController *viewController = [[MyViewController alloc] initWithNibName:@"MyViewController" bundle:nil];
    self.window.rootViewController = viewController;
    [self.window makeKeyAndVisible];

    return YES;
}

]]>2013-09-09T15:17:00+02:00http://adoptioncurve.net/archives/2013/09/testing-for-cowards-part-3-testing-the-full-interfaceIntroduction to part 3

This is the third of three posts (part 1 | part 2) that works through the presentation I gave at September’s iOSDevUK conference in Aberystwyth. In the first, I covered the background to test-driven development of the simple traffic lights project I’m using as an example; and looked at building the app’s model layer using a test-driven approach. The second covers testing user interaction by exposing the methods that underlie the interface.

Testing the lights

Once the model and user interaction is tested, the final piece of the jigsaw is testing that the user interface can be successfully updated by the model. This is a somewhat arbitrary division of testing, and I will probably approach things differently in another project.

Having said that, the model-view-controller structure of the app means that there’s something of a natural division between the way that the user interacts with the model (mediated through the user interface) and the way in which the user interface is updated as a result of the model’s behaviour.

The view controller is responsible for handling the lights code returned by the LightEngine and updating the display accordingly. The code is a decimal version of the binary representation of the lights:
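The actual bit assignments live in the project on GitHub; as a purely hypothetical illustration of the idea, each lamp could map to one bit of the code:

```objc
// Hypothetical encoding: one bit per lamp. A decimal code like 33
// (binary 100001) could then mean "upstream red + downstream green".
- (void)updateLightsForCode:(NSNumber *)lightsCode {
    NSUInteger bits = [lightsCode unsignedIntegerValue];
    self.upstreamRedLight.backgroundColor =
        (bits & (1 << 0)) ? [UIColor redColor] : [UIColor blackColor];
    self.upstreamAmberLight.backgroundColor =
        (bits & (1 << 1)) ? [UIColor orangeColor] : [UIColor blackColor];
    self.upstreamGreenLight.backgroundColor =
        (bits & (1 << 2)) ? [UIColor greenColor] : [UIColor blackColor];
    // ...and similarly for the downstream set of lights
}
```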

The first set of tests check that the updateLightsForCode: method works correctly:
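To give a flavour of those tests in Kiwi (the outlet names here are illustrative, not lifted from the project):

```objc
// Sketch: exercise updateLightsForCode: directly with known codes
it(@"turns the upstream red light on for a code with bit 0 set", ^{
    [viewController updateLightsForCode:@1];
    [[viewController.upstreamRedLight.backgroundColor should]
        equal:[UIColor redColor]];
});

it(@"turns all the lights off for a code of 0", ^{
    [viewController updateLightsForCode:@0];
    [[viewController.upstreamRedLight.backgroundColor should]
        equal:[UIColor blackColor]];
});
```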

It’s worth noting here that whereas normally you’d try to write code with the minimum of redundancy consistent with readability, with tests that’s not the case. You want the tests to be as clear as possible, and if that means writing lots of code, well, that’s what copy-and-paste was invented for.

It should be immediately obvious what these tests are about, because they’re written out in long form. They could be much more concise with a for-each loop and an array of values to iterate across – but then there would be two cognitive loads: one to understand the mechanics of the test, and one to understand the test itself.

Once the operation of the updateLightsForCode: method is proven, then we can look at hooking it up to the UI:
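A sketch of that UI-level test (the button-handler names and the eight-step cycle length are assumptions for illustration):

```objc
// Sketch: drive the whole sequence through the UI, repeatedly, and
// check it wraps back around correctly.
it(@"survives repeated trips around the light sequence", ^{
    [viewController didTapStartButton];
    for (NSUInteger cycle = 0; cycle < 25; cycle++) {
        // Assume eight ticks per complete cycle of both light sets
        for (NSUInteger tick = 0; tick < 8; tick++) {
            [viewController didTapTickButton];
        }
        // After a complete cycle, the upstream red should be showing again
        [[viewController.upstreamRedLight.backgroundColor should]
            equal:[UIColor redColor]];
    }
});
```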

There’s an element of stress testing in this last test – the sequence is repeated 25 times, which should be enough to expose any edge cases when wrapping back to the start of the state machine sequence. And the point about automated testing here is that you could equally choose to test the same sequence 250, 2,500 or 250,000 times – something that would be virtually impossible with manual testing.

Summary

This is a very simple app with a minimum of moving parts – yet it’s got all the elements needed to make for a complex set of interactions. Functions like locking some buttons in response to tapping others can quickly become convoluted and difficult to test thoroughly with a “trained monkey” approach.

Having a set of automated tests can help by allowing the tests to be exhaustive and completely repeatable. This would come into its own if the app was extended in the future – the original set of tests would immediately expose any areas where new functions broke old ones.

The other big advantage of taking a test-driven approach in my opinion is that it forces you to stop and think about the structure of the app at the right moment in the app’s lifecycle. By testing each element in isolation, it’s possible to be sure that one part works before moving onto the next one.

Although that might not be such a big issue in a “toy” app like this, on larger-scale projects (and particularly ones with multi-developer teams), testing will provide a “comfort blanket” that things work in the way they’re intended to. That prevents the cognitive load that comes with uncertainty about the behaviour of areas of code.

Further reading

There’s no shortage of material about testing online and in books, but much of it suffers from the twin problems of a) approaching testing in a quasi-religious dogmatic “test all the things” approach; and b) being very Java-centric.

Kiwi as a framework is heavily influenced by RSpec, and the canonical reference for this is The RSpec Book by David Chelimsky. Although this is Ruby-centric, it’s a good introduction to the processes involved in behaviour and test-driven development.

Rails 4 In Action builds on the concepts covered in The RSpec Book to build out a working Rails site. Again, this is completely focussed on Ruby and Rails, but it does illustrate the BDD and TDD process extremely well.

For an iOS and Objective-C focussed approach, Graham Lee’s Test-Driven iOS Development is excellent. It doesn’t use Kiwi, but is a great introduction to the concepts and practice of testing using the SenTest library.

There are relatively-few Kiwi-specific resources around – the GitHub wiki is increasingly comprehensive; and Test Driving iOS Development with Kiwi is a short iBooks title that covers the basics.

And of course, for all other questions and queries there’s the incomparable resource that is Stack Overflow.

]]>2013-09-09T15:13:00+02:00http://adoptioncurve.net/archives/2013/09/testing-for-cowards-part-2-testing-user-interfacesIntroduction to part 2

This is the second of three posts (part 1 | part 3) that works through the presentation I gave at September’s iOSDevUK conference in Aberystwyth. In the first, I covered the background to test-driven development of the simple traffic lights project I’m using as an example; and looked at building the app’s model layer using a test-driven approach.

Setting up the user interface for testing

Once the model works, I switch attention to the user interface (specifically, the user interactions). This is where the perception of iOS testing as difficult often arises – how do you test something that relies on a real live user touching something?

The answer is to think of the interaction as involving two layers – there’s the view layer, which the user touches; and the view controller which reacts to the touches with code. So you’re not interested in testing the touch itself – what you’re actually testing is the IBAction method behind the scenes.

If you’re happy to assume that the UI controls are actually linked to the underlying method (and you can test that the connections are made correctly if you want to) then the testing process is simply a case of making sure that your didTapSomeButton method does what it should do when the someButton gets tapped.

The other UI testing question that crops up is how you can check the status of interface controls – what colour is the background of a UIView for example?

The reason this becomes an issue is that IBOutlets are normally declared in the view controller’s implementation file; and are encapsulated away from the view of the test.

You could declare them in the header file, but that would break encapsulation and just feels a bit wrong – working code shouldn’t have to change for the sake of tests. The workaround is to declare all the private properties and methods that your test will need access to in a category on your view controller at the top of the test. So the top of the UITests file looks like:
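A sketch of what that might look like (class, property and method names are illustrative):

```objc
// Sketch: re-declare the private properties and methods the tests need
// in a category, so the working code doesn't have to expose them.
@interface TrafficViewController (TestVisibility)
@property (nonatomic, strong) UIView *upstreamRedLight;
@property (nonatomic, strong) UIButton *tickButton;
- (void)didTapTickButton;
@end

SPEC_BEGIN(UITests)

__block AppDelegate *appDelegate;

beforeAll(^{
    // Instantiate the app delegate and fire the launch method by hand
    appDelegate = [[AppDelegate alloc] init];
    [appDelegate application:[UIApplication sharedApplication]
        didFinishLaunchingWithOptions:nil];
    // The cast here is the Kiwi-specific oddity described below:
    [[(NSObject *)appDelegate shouldNot] beNil];
});

// ...the tests themselves follow...

SPEC_END
```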

This is pretty straight-forward – we instantiate an instance of appDelegate then fire the application:didFinishLaunchingWithOptions: method which is where the LightEngine is instantiated along with the view controller.

There’s one bit of Kiwi-specific weirdness – it’s necessary to cast the appDelegate to an NSObject to get the test to compile, for some reason that I’ve yet to fathom.

Testing the user interface

Assuming that you’ve got the user interface into a state where the tests can get at it, you can then start testing it. My approach is to start with the initial default state and test that everything is where I expect it to be – this can be useful if you want the UI to load with certain controls disabled, for example. In this case, I want the lights to be black rather than the default gray, and the buttons in the correct state:
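Sketched out in Kiwi, those initial-state checks might look like this (outlet names are illustrative):

```objc
// Sketch: verify the launch state of the UI before anything is tapped
it(@"starts with all the lights black", ^{
    [[viewController.upstreamRedLight.backgroundColor should]
        equal:[UIColor blackColor]];
});

it(@"starts with the tick and stop buttons disabled", ^{
    [[theValue(viewController.tickButton.enabled) should] beNo];
    [[theValue(viewController.stopButton.enabled) should] beNo];
});

it(@"starts with the start button enabled", ^{
    [[theValue(viewController.startButton.enabled) should] beYes];
});
```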

When things get to the tick button, we can change approach slightly. The button causes a tick method to be sent to the delegate, which is an ideal place to use a mock object.

All we’re testing here is that the correct message is sent by the view controller – we’ve already tested what the delegate (in this case the LightEngine) will do. So there’s not really any point in going to the effort of creating a real, live LightEngine instance when we can use a mock object instead.

The mock will stand in for the LightEngine much as a stunt double will dive through the plate glass window instead of the movie star with the main billing. So long as the mock looks and behaves like a LightEngine, the view controller will happily accept it as such. This is often referred to as ‘duck typing’ – if it looks like a duck, walks like a duck and quacks like a duck, for our purposes it probably is a duck.
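As a sketch, the mock-based test might look like this (the protocol, delegate property and button-handler names follow the post’s description rather than being guaranteed verbatim):

```objc
it(@"sends tick to the delegate when the tick button is tapped", ^{
    // Create a mock that pretends to be a LightEngine
    id delegateMock = [KWMock mockForProtocol:@protocol(LightEngineProtocol)];
    [[delegateMock should] conformToProtocol:@protocol(LightEngineProtocol)];

    // Stand the mock in as the view controller's delegate
    viewController.delegate = delegateMock;

    // Expect the tick message, and hand back a canned lights code
    [[delegateMock should] receive:@selector(tick) andReturn:@164];

    [viewController didTapTickButton];
});
```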

Here we’ve created an instance of the KWMock class and told it to pretend to conform to the LightEngineProtocol. There’s a quick test to make sure that the mock has been created properly, and then the mock is set as the delegate of the view controller.

Then we set an expectation – that the delegateMock will receive a tick message, and we tell it to return @164 when it does. That expectation set, we can then call the didTapTickButton method. If the test gets to the end and the mock hasn’t received the tick message, the test will fail.

At this point, the combination of LightEngine and UI tests will have verified that all aspects of the tick process are working correctly. Next is handling the stop button:

This set of tests is very similar to those which came before – the beforeEach block is where the stop button is tapped, and the tests are checking that the correct messages are sent; the lights change colour correctly, and the buttons get updated.

Next steps

In the next post, I’ll cover testing the methods that update the user interface in response to the LightEngine codes.

]]>2013-09-09T11:42:00+02:00http://adoptioncurve.net/archives/2013/09/a-cowardly-test-o-phobes-presentation-from-iosdevukLast week I gave a presentation at iOSDevUK in Aberystwyth on test-driven development for iOS. This is the first of three posts that go through the presentation itself and some of the background, together with links to other resources. ( Part 2 | Part 3 )

It sometimes seems to me that unit testing on iOS has been a poor relation ever since the framework was released – and having talked to some ex-Apple people one evening, it seems that unit testing hasn’t been an approach that Apple have used much internally. This might go some way to explaining the relatively poor state of the tools compared to, say, the Rails world.

That said, with a combination of the Apple tools that are available and those created by the iOS community, unit testing and test-driven development are both possible and feasible.

It’s my contention that test-driven development delivers better-quality code, so the purpose of the talk was to try to show that building apps with this approach is something that can and should be done.

Having said that, I don’t have a lot of time for the quasi-religious waffle that tends to characterise discussion of test-driven development. Like any technical topic, there are those who lose sight of the end goal in the search for the “one true way” of doing things. I get paid by the delivered project, so I’m more pragmatic – if it helps me build bug-free code quicker, then I’ll live with the lack of ideological purity.

Bearing that in mind, what follows here is my take on things, and other opinions are available. This is the first of three posts which walk through the presentation code that can be downloaded or cloned from GitHub at https://github.com/timd/TrafficLightTests. It’s not a word-for-word transcript of what I said, but this in combination with the test code itself should be clear enough.

The project

The demo project is an iPhone project that runs a simulation of a set of traffic lights running the UK sequence – red; red and amber; green; amber; and back to red. There are two sets of lights – one controlling upstream traffic; and the other controlling downstream.

The app launches with all lights off; tapping the start button launches the sequence with two red lights. From there, tapping the tick button toggles each set of lights through the sequence in turn before repeating. Tapping the stop button at any time resets the sequence back to two reds.

Before the sequence is started by tapping start, the tick and stop buttons are disabled. Once the sequence is in progress, the start button is disabled, and tick and stop are enabled.

The app structure

The app has a model-view-controller structure, and uses a state machine to control the state of the lights as they run through the sequence. The LightEngine class acts as a delegate to the ViewController and has a single tick method that returns an NSNumber light code. That’s a decimal representation of the state of the lights at each stage of the sequence, together with a flag to show whether the traffic is flowing upstream or downstream.

The ViewController class uses the code to control the state of the lights, by making a bitwise comparison of the code’s binary value and toggling each light in turn. It also responds to the start and stop buttons by resetting the lights, LightEngine and button enablement.

The source code

The testing approach

I split the talk into two sections – the first was a quick demo of how to add a test target to a project; and then install the Kiwi framework to replace the default SenTest. I’ll cover those steps in a separate, later post.

The second section looked at building the app from scratch using a test-driven approach. To do this in 40 minutes while typing frantically was a bit of a tall order, so I cheated slightly by using Xcode snippets to store the code rather than typing it; and Git to flip between the project in its various states.

The structure of the testing covered three areas:

the LightEngine model, which returns the code for the lights at each step in the sequence

the user interface – wiring up the buttons to call the right methods, and control the initial state of the lights

the lights themselves – taking the code returned from the LightEngine and updating the display accordingly.

That’s by no means the only approach that could be used; but it makes logical sense to me to start from the inside-out by building the model; getting the model to respond to user input; then fully-updating the interface.

Testing the model

The model is a standalone NSObject subclass that has one role in life – to act as a state machine that steps through each permutation of lights in turn, and respond with a code to a tick method. It’s linked to the view controller as the delegate – this isn’t necessarily an architecture that you’d use in real life, but it was a handy way of demonstrating testing of delegates.

“Classical” TDD starts with a blinking cursor in an empty text file, and works from there. That’s not possible with an Xcode project, because the compiler will complain about missing classes – so the first step is to test that the object can be instantiated. Also included in that batch are tests for the existence of the tick method, and that an NSNumber is returned:
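That first batch of Kiwi tests might be sketched like this:

```objc
describe(@"LightEngine", ^{
    __block LightEngine *engine;

    beforeEach(^{
        engine = [[LightEngine alloc] init];
    });

    it(@"can be instantiated", ^{
        [[engine shouldNot] beNil];
    });

    it(@"responds to the tick method", ^{
        [[engine should] respondToSelector:@selector(tick)];
    });

    it(@"returns an NSNumber from tick", ^{
        [[[engine tick] should] beKindOfClass:[NSNumber class]];
    });
});
```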

The chances of a bin sniffing the address of a MacBook are fairly slim unless you happen to be using it within wifi range – but there are situations where it’s actually useful to change the address. Tom’s example was reconnecting to time-limited public wifi, but there are plenty of others.

It’s not tricky to do, just fiddly – so here’s a quick-and-dirty script that changes the MAC address of a MacBook’s built-in Airport adaptor to a random value (it assumes that the Airport adaptor is known by the system as en0, but I’ve never come across a MacBook where that wasn’t the case.)
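A minimal sketch of such a script (assuming en0, and that the `airport` utility lives at its usual private-framework path):

```shell
#!/bin/sh
# Quick-and-dirty sketch: give the built-in Airport adaptor (assumed to
# be en0) a random, locally-administered MAC address.

# A first octet of 02 marks the address as locally administered, so it
# can't collide with a real vendor prefix
NEW_MAC="02:$(openssl rand -hex 5 | sed 's/\(..\)/\1:/g; s/:$//')"
echo "New MAC address: ${NEW_MAC}"

# Only attempt the change on a Mac, running as root
if [ "$(uname)" != "Darwin" ]; then
    echo "This script only makes sense on OS X" >&2
    exit 0
fi
if [ "$(id -u)" -ne 0 ]; then
    echo "Re-run with sudo to apply the change" >&2
    exit 0
fi

# Disassociate from the current network, then apply the new address
/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -z
ifconfig en0 ether "${NEW_MAC}"
```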