Will code for food

The developer’s mind

As a developer I’m always thinking about ways to use software to solve problems. So, when something happens in my personal life that prompts me to think “damn, there should be a (better) way to do this”, I immediately start looking for a way to solve it with software.

The problem

One day I went to the gym with my wife. While she was in the weight room I was running on the treadmill. The treadmills are on a balcony overlooking the weight room downstairs, so depending on where she was I could see my wife from the treadmill.

We both work out with our iPhones playing music or podcasts, and at one point I wanted to send her a message to let her know I could see her and was thinking about her, just to make her laugh. But have you tried running on a treadmill and texting at the same time? It’s way harder than while driving (just kidding, I don’t text and drive). And so the thought inevitably came to mind: “There should be a better way to do this.” And Kush was born.

I laid out the idea in my head and started thinking about ways to make an iPhone app that was easy to use (required very few touches) but, at the same time, had the power to send meaningful messages to your significant other. And that was nice to look at, too!

The solution

A couple of days later I had a rough prototype. I installed it on our iPhones and we started using it in our daily lives. The app had a different sentence composer with fewer sentences, but it already had most of the kissing sounds it has now.

As we started using Kush it became evident that this was an app worth making, not just for our own use but as something others could benefit from. Because of the unique kissing sounds, you immediately know when a message arrives that it’s from your partner. Knowing that someone is thinking of you is a great feeling. Even though the messages are all pre-made, there are so many to choose from that you can tell the person chose that message just for you.

Kush has made this kind of short and sweet communication more frequent between us. Before Kush, I remember thinking many times a day about sending her a quick “Hello” but wouldn’t, either because I was too busy to type a message or because just a “Hello” would feel stupid and I couldn’t come up with something much different from an “I love you”.

I contacted a designer friend of mine and we started working on the look of the app. As I was thinking about ways to promote it, I suddenly remembered that Valentine’s Day was just a couple of weeks away! The thought hadn’t been obvious to me before because in Brazil Valentine’s Day is in June, not February (we already have Carnival in February…).

The deadline

We decided to try to release it on Valentine’s Day. While I worked on the code like crazy, my wife wrote romantic phrases for the sentence composer and my designer drew and produced all the graphical elements. We submitted the app for Apple’s approval Thursday morning and asked for an expedited review, explaining that it was an app for couples and that we would love to have it for sale by Valentine’s Day. They granted me an exception (thanks, Apple) and the app is now available in the App Store! We also managed to make a very nice video explaining Kush. Thanks to my designer for the video and to his girlfriend for the voiceover.

We’re really eager to see what happens now. I hope the app gets used by many, many couples, and that they benefit from Kush the way my wife and I do. One thing I can vouch for personally: your wife may leave home angry about something, but after a “Kush” she will come home happy to be your wife ;]

How it all started: A love and hate story.

Since the first time I had to use a UIActionSheet or UIAlertView in an app, I’ve disliked the way they’re implemented. Everything is done by invoking delegate methods, which is a pain if, for example, you have two kinds of alerts in the same class. I also disliked the fact that the code that should run when a button is tapped almost always lives in a separate place in your source code. You need a bunch of constants and switches, and you have to tag your UIAlertViews… I hated it!
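
To make the complaint concrete, here’s a sketch of the old delegate-based pattern (kDeleteAlertTag and the deletion logic are made up for illustration):

```objc
#define kDeleteAlertTag 1

// Showing the alert happens here...
- (void)askToDelete {
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Delete?"
                                                    message:@"This cannot be undone"
                                                   delegate:self
                                          cancelButtonTitle:@"Cancel"
                                          otherButtonTitles:@"Delete", nil];
    alert.tag = kDeleteAlertTag;  // needed to tell alerts apart in the delegate
    [alert show];
    [alert release];
}

// ...but the button's code ends up way over here, keyed off tags and indexes.
- (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex {
    switch (alertView.tag) {
        case kDeleteAlertTag:
            if (buttonIndex != alertView.cancelButtonIndex) {
                // the actual deletion code
            }
            break;
        // ...one case per alert shown by this class...
    }
}
```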

But on the other hand they are very useful for asking for information in a modal way, so I kept using them when appropriate.

To see an explanation of how they work, take a look at the blog post that originated PLActionSheet and inspired PLAlertView: “Using Blocks” by Landon Fuller, who apparently hates UIActionSheets as much as I do.

Since I found these classes I’ve incorporated them into every one of my projects. And when I took over as lead developer for Arrived’s iPhone app, I took a few hours right at the beginning of the project to convert every UIActionSheet and every UIAlertView into a BlockActionSheet and BlockAlertView (I renamed the classes to make the names more memorable and descriptive).

A new kind of hate

Arrived has a very distinctive look. I love the design of the app: lots of textures, the red carpet over the pictures on the stream, the custom buttons, the title; even the tab bar is customized to look unique. So, in the middle of this very nice color scheme, whenever I had to use an Alert View or an Action Sheet I was punched in the face by a freaking blue alert! How I hated those Alert Views ruining the look of the app.

And then I got Tweetbot. What a nice app, what a unique interface and… what the hell? They customized their Alert Views! Super cool. Right then I thought: I gotta have this…

Hate is a very effective motivator

We then decided to terminate every instance of the default Alert View and Action Sheet. Since I already had every call to them wrapped in my own Block* classes, it was just a matter of changing those classes and everything would work as before, but with a much better look.

And so we did, and we decided to open source it. And let me tell you, they look great!

But before I send you over to our repository to download this baby, let me tell you how the classes work and what their current limitations are.

Using the library

If you’re familiar with the above-mentioned PLActionSheet and PLAlertView you will have no problem adjusting to these classes, as I didn’t change their methods at all. I added some methods to make the classes even better, but everything that used the old classes works with no modifications.

You’ll need to import 6 files into your project: 4 for the two main classes (BlockActionSheet.(h|m) and BlockAlertView.(h|m)) and 2 for another view that serves as the background for the alerts and action sheets, obscuring the window to make it look very modal and help the user focus on the dialog (BlockBackground.(h|m)). You’ll never have to use this third class directly, though, as everything is handled by the two main classes. You’ll also need the image assets that we use to draw the views, such as the buttons and background.

To create an alert view you use:

BlockAlertView *alert = [BlockAlertView alertWithTitle:@"Alert Title"
                                               message:@"This is a very long message, designed just to show you how smart this class is"];

Then for every button you want you call:

[alert addButtonWithTitle:@"Do something cool" block:^{
    // Do something cool when this button is pressed
}];

You can also add a “Cancel” button and a “Destructive” button (this is one of the improvements that UIAlertView can’t even do):

[alert setCancelButtonWithTitle:@"Please, don't do this" block:^{
    // Do something or nothing... This block can even be nil!
}];
[alert setDestructiveButtonWithTitle:@"Kill, Kill" block:^{
    // Do something nasty when this button is pressed
}];

When all your buttons are in place, just show:

[alert show];

That’s it! Showing an Action Sheet works almost exactly the same way. I won’t bore you with more code here, but the repository has a demo project with everything you’ll need.

You can even have more than one cancel or destructive button, despite the fact that these methods are prefixed set rather than add. This is because I wanted to keep the names I used in the original libraries, where you could only have one cancel button. Feel free to rename them if you don’t have legacy code like I did.

Another cool thing we did was add an animation when showing and hiding the new views, as Tweetbot does. This is another area where you can go nuts and add all kinds of animations.

The look of the alerts and action sheets comes from a few assets for the background and the buttons, so if you want to change the color scheme all you need is a little time to replace ours. Check out the included assets and swap them out if they don’t work for you.

The only limitation these classes have so far is with device rotation. As Arrived only works in portrait, this is not a problem I needed to solve. And it’s not trivial: you’d have to reposition the buttons and text because the window now has a different size, and the alert might be too tall to hold a long message in landscape. You might need to add scrolling for some action sheets too. But feel free to fork and fix this!

Gimme that!

You can get everything you need from our GitHub repository. There’s a demo project with lots of buttons to trigger alerts and action sheets until you get sick of them.

Also included in the project, though you might want to roll your own, are the graphical assets for the buttons and backgrounds. You can use ours, but they might not fit the look of your app.

Now go get the project and have fun with it. Feel free to fork it and send pull requests so we can incorporate your changes for everyone.

Launching

Today is the day of the official launch of the PhotoAppLink library. The library is a joint effort between me and Hendrik Kueck from PocketPixels, maker of the ever-top-selling ColorSplash. We have a website if you want the latest news about it. This post tells the story behind it.

The problem

Since the first version of my first iPhone camera app, Snap, I’ve wanted my users to be able to share their annotated images with as many services as possible. I did the obvious: Twitter, Facebook, Tumblr, and I still want to add more to this list.

But one thing was still not possible: sharing images with another app. How could I send an image from Snap to Instagram so that users could apply some filters and share it? How could I send it to AppX so that users could add filters, frames and whatever else AppX might offer?

And the other way around, too. What if a user takes a picture with AppX and wants to add some text on top of it? AppX might not offer this, but Snap does. Wouldn’t it be nice if AppX could open Snap with an image, Snap could add notes to it and then send the result back to AppX? There was just no way of doing this. Or at least not until now.

The proposal

What I wanted (you’ll understand the past tense in a moment) to propose was really quite simple, but quite ingenious (or so I thought).

The iOS API allows us to implement custom URL schemes. I wanted every camera or image processing app to implement a custom URL scheme so that we could all exchange images with each other.

So I hacked together a way to Base64-encode an image and send it to another app using these custom URL schemes. It worked well in some tests, so I wrote a library and started sharing it with some top devs in the photography section of the App Store.

Some people didn’t even respond, but Hendrik Kueck from PocketPixels (maker of the ever-top-selling and very fun ColorSplash) responded, telling me he’d had a similar idea over a year ago (and I thought my idea was so original…) but hadn’t gotten many people on board, so he’d kind of forgotten about it.

He sent me his code and I think my email made him regain his enthusiasm so we decided to iron out a few missing things in the library that would make adoption much easier and try to get more people on board.

When I checked his library I saw that his idea, even though it also used custom URL schemes, was to use a custom pasteboard to pass the data from one app to the other. WAY better than Base64-encoding everything. What a revelation that was.

So I threw away most of my code and ported my app to use his in about an hour. It’s called PhotoAppLink (mine would have been called iOSImageShare; even his name is better… damn…) and he even registered a domain for it.

How does it work?

There’s a Readme file with code and a step-by-step tutorial on how to implement this in your app, but first let me explain how it works. It’s really very simple.

When you want to send an image to another app, we just create a custom pasteboard with a common name and paste the image’s NSData (JPEG-encoded at very high quality) to this pasteboard. We then open a custom URL registered by the other app.

The system then opens this other app, which knows it’s being called to open a custom URL. The app checks the shared pasteboard, gets the image from there and then… well, that’s up to the app. In the case of Snap, I’ll open the annotation screen so that the user can add notes. ColorSplash will open and prepare the image for processing just as if you were picking it from your Camera Roll.
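
In code, the sending side sketched above looks roughly like this (the pasteboard name and the receiver’s URL scheme are placeholders here, not PhotoAppLink’s real identifiers):

```objc
// Sketch of the sending side: paste the JPEG data to a shared named
// pasteboard, then open the custom URL registered by the receiving app.
- (void)sendImage:(UIImage *)image {
    UIPasteboard *pasteboard =
        [UIPasteboard pasteboardWithName:@"com.example.sharedpasteboard" create:YES];
    pasteboard.persistent = YES;  // keep the data around after our app closes
    [pasteboard setData:UIImageJPEGRepresentation(image, 0.95)
      forPasteboardType:@"public.jpeg"];
    [[UIApplication sharedApplication]
        openURL:[NSURL URLWithString:@"receiverapp://photoapplink"]];
}
```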

So, all very simple, right? Well, if you’re paying attention there’s one thing that’s missing here: how do I know what URL to open?

Who wants to play?

So, you’ve decided to implement PhotoAppLink in your “soon to be the best” camera app, but you feel lonely. You don’t really know what other apps you can send your images to. Well, not to worry, my friend, we’ve got a solution for you.

We host a plist on our photoapplink.com website called photoapplink.plist. This file contains information about all compatible apps. If you implement PhotoAppLink in your app, just send us an email with all your info and we’ll add your app to the file.

Our library then simply downloads this file and uses UIApplication’s canOpenURL: to check whether each app is installed. The library will also download all the compatible apps’ icons (and cache them) automatically in the background.
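
The installed-or-not check boils down to one UIKit call per app in the plist (the scheme below is a placeholder, not a real PhotoAppLink scheme):

```objc
// Ask UIKit whether any installed app has registered this custom scheme
NSURL *appURL = [NSURL URLWithString:@"receiverapp://photoapplink"];
BOOL isInstalled = [[UIApplication sharedApplication] canOpenURL:appURL];
```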

When your user wants to send a picture to another app, you can use a UIViewController from the library that handles everything, from showing compatible apps to sending your image.

But if you don’t like the interface we built or if it doesn’t fit your app, no problem. The library can provide all the information about compatible apps so that you can build your own interface. Or just change the interface we provide to fit your app.

That’s it?

Well, not quite. If you’re still not convinced that implementing this in your app is a good idea, I think this will change your mind.

When we present the list of compatible apps to the user, we can check which apps the user has installed, but we also now know about a bunch of apps the user doesn’t. So, in our UIViewController we have a “More apps” button. It presents a list of all the compatible apps the user doesn’t have yet, in a nice table with a nice button to get each app.

This button opens the App Store app so the user can get the app immediately! AND it uses a link with your LinkShare site ID, so you even get a commission on the sale.

So, your app can earn more revenue selling other apps AND your app can now be discovered by users of other PhotoAppLink-compatible apps. How cool is that!

And, again, if you don’t like our interface, just change it or roll your own using the information gathered by the library.

Let’s play?

Convinced? Great. There’s a very quick tutorial on how to implement PhotoAppLink in your app. It will take you about an hour if you use the controls we provide, and there’s a test app you can use to check the interaction with your app. The whole process should not take more than 4 hours, testing included!

Sorry

Well, I got back from WWDC and there was just too much to do, so I’ve been neglecting my blog a little. And since I already missed one post on AltDevBlogADay and today I was about to miss another (3 strikes and I’m out???), I decided to put together something quick but maybe useful for all you iOS devs out there.

I’ll share two tricks I recently had to use for Snap. One I learned during one of the labs at WWDC; it’s an old but very well-hidden trick that isn’t covered by the NDA, so I can share it. The other is something I hacked together on my own, but it got somewhat validated by an Apple engineer I showed it to, so now I feel more confident showing it in public…

First trick

Snap is a camera app, and my users were asking me to implement zooming. I studied the API a bit and there was no way to tell the camera to zoom. What I came up with (and Apple engineers who work on this API have said it’s the right thing to do) was to change the frame of the camera preview layer so that it “bleeds” out of the view, giving the illusion of zoom. After taking the picture I have to crop, but that’s another story.

My problem was that when I changed the frame of the layer, even though I was not applying any animation, the system would animate the change and the interface felt a little weird, like the zoom was “bouncing”. It’s hard to explain, but the result was not good and I could not figure out how to remove this animation.

During one of the labs I asked an Apple engineer about this, and as he was about to go looking for the answer, another attendee across the table who had overheard me said he knew how to do it and very quickly guided us to the spot in the documentation where this little trick is hidden.
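
The post never spells the trick out, but I believe it is the documented CATransaction way of suppressing implicit layer animations (previewLayer and zoomedFrame are assumed names):

```objc
// Change the preview layer's frame without the implicit animation
[CATransaction begin];
[CATransaction setDisableActions:YES];  // suppress implicit animations
previewLayer.frame = zoomedFrame;       // "bleed" the layer to fake a zoom
[CATransaction commit];
```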

Second trick

Another problem I faced with Snap was that, even though saving the image happens mostly in the background using blocks and GCD (more about this in another post…), in order to compose the image I still had to make the user wait. I could do it in the background, but that would involve copying a lot of memory and I didn’t want to do that on the iPhone. It’s fast enough not to be a real problem, but I didn’t like that the interface froze while I was composing the image with the notes and the user was just staring at an unresponsive device.

So, I decided to use MBProgressHUD to at least show something to the user. My problem was that I had a lot of calls to the method that generates the image, and the callers expect to get the UIImage back. As the calls are made on the main run loop and the method takes too long, the interface froze and the HUD would not show.

Yes, I could have refactored everything to use GCD and callback blocks, but I had to release an update and didn’t have much time. So, I decided to pump the main run loop myself:
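
The original snippet didn’t survive here, so this is a reconstruction of the idea under my own assumptions (the HUD calls follow MBProgressHUD’s classic API; composeImage is a stand-in for the slow synchronous method):

```objc
// Show the HUD, spin the main run loop once so it actually gets drawn,
// then do the slow synchronous work and tear the HUD down.
MBProgressHUD *hud = [[MBProgressHUD alloc] initWithView:self.view];
[self.view addSubview:hud];
[hud show:YES];
[[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                         beforeDate:[NSDate dateWithTimeIntervalSinceNow:0.05]];

UIImage *result = [self composeImage];  // the slow, synchronous call

[hud hide:YES];
[hud removeFromSuperview];
[hud release];
```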

Even though it’s kind of an ugly hack, it can be used in situations where you really have to make the user wait and a synchronous call on the main thread is what you already have, or what’s fastest for you to implement.

I don’t recommend this for every situation. In some cases it might lead to a deadlock in your app, so test it a lot if you decide to use it. It worked for me. And, as I said, I showed it to an Apple engineer during one of the labs and he said it was a good solution to the problem.

I have a fork of MBProgressHUD and have used these principles to build a category for it that does this AND can even be cancelled by the user. That version is even hackier, so I won’t go into it right now for lack of time, but if anyone wants to read about it, just ask in the comments and I will.

An afterthought about WWDC

One of the things I learned during last year’s WWDC is that, even though the sessions are great, the labs are even better. And since the sessions are not open for questions and are usually out on iTunes for you to watch less than 2 weeks after the event, this year my main priority was the labs. I went to every lab I could think of, and even went to some twice.

So, my advice to any WWDC attendee: forget the sessions and go to the labs! The sessions you can watch later at home, but you only have access to the great engineers building the stuff we use for these 5 days, so make the most of it. Even if you have a stupid question, don’t be shy: go to a lab and ask it. These guys are great and always willing to help. This is consulting from Apple that is well worth the US$1600. I’d even say that 1600 is cheap! (Don’t tell Apple, though…)

I even bumped into a guy who helped me last year; he remembered me and my problem and tried to help me again this year, even though my new problem was not at all related to his expertise. Nice guy. Thanks again, Sam. See you next year.

Oh, and have I mentioned that you should get Snap for your iPhone? Check it out. You don’t know how useful and fun your iPhone camera can be until you have Snap!

Well, that’s it. Sorry for the quick post. I’ll come up with something better next time. And if you have any comments on this post please leave them here and I’ll try to respond and correct whatever you guys come up with.

One thing was missing from that post, though: how to get the metadata from existing images. In this post I’ll show you a few ways to do it using my NSMutableDictionary category.

Getting images using UIImagePickerController

If you’re getting images from a UIImagePickerController, you have to implement this delegate method:

In iOS 4.1 or later, the info dictionary has a key called UIImagePickerControllerReferenceURL (for images from the library) or UIImagePickerControllerMediaMetadata (for images taken with the camera). If your info dictionary has the UIImagePickerControllerMediaMetadata key, you just have to initialize your NSMutableDictionary with the NSDictionary you get from it:
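
The delegate method and the camera-image case might look like this sketch (the dismissal and what you do with the metadata are up to your app):

```objc
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
    NSDictionary *cameraMetadata =
        [info objectForKey:UIImagePickerControllerMediaMetadata];
    if (cameraMetadata) {
        // Camera image: the metadata dictionary is handed to us directly
        NSMutableDictionary *imageMetadata =
            [[NSMutableDictionary alloc] initWithDictionary:cameraMetadata];
        // ...add your own properties, save the image, etc...
        [imageMetadata release];
    }
    [picker dismissModalViewControllerAnimated:YES];
}
```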

But if you picked an image from the library, things are a little more complicated and not obvious at first sight. All you get is an NSURL object. How do you get the metadata from that? Using the AssetsLibrary framework, that’s how!

One caveat: because this uses blocks, there’s no guarantee that your imageMetadata dictionary will be populated when this code returns. In some of my tests it sometimes runs the code inside the block even before the [library autorelease] is executed, but the first time you run it, the code inside the block will only run on another cycle of the app’s main loop. So, if you need to use this info right away, it’s better to schedule a method on the run loop for later with:
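
For the library case, a sketch of the two pieces: reading the asset’s metadata through AssetsLibrary, and deferring the code that needs it to a later run loop cycle (useMetadata is a hypothetical method of yours):

```objc
// Library image: all we got is the asset URL, so ask AssetsLibrary for it
NSURL *assetURL = [info objectForKey:UIImagePickerControllerReferenceURL];
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:assetURL
         resultBlock:^(ALAsset *asset) {
             // This may run on a later run loop cycle, not immediately
             NSDictionary *metadata = [[asset defaultRepresentation] metadata];
             [imageMetadata setDictionary:metadata];
         }
        failureBlock:^(NSError *error) {
            NSLog(@"Couldn't read asset: %@", error);
        }];
[library autorelease];

// Don't touch imageMetadata right away; give the block a chance to run first
[self performSelector:@selector(useMetadata) withObject:nil afterDelay:0.0];
```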

And you’re done! The category checks the iOS version and the correct keys and does everything for you. Just be careful about the blocks issue I mentioned above.

Reading from the asset library

Well, I kind of spoiled the answer to this one already. If you’re using the AssetsLibrary to read images, you can use the method above, with the same caveat: the metadata might not be accessible until some time after the method is called.

Again I created an init method in my category:

- (id)initFromAssetURL:(NSURL *)assetURL;

Using AVFoundation

iOS 4.0 introduced AVFoundation, which gives us a lot of ways to work with pictures and the camera. Before iOS 4, if you wanted to take a picture you had to use a UIImagePickerController. Now you can use AVFoundation and have a lot of control over the camera, the flash, the preview, etc.
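
Relevant to this post’s topic, here’s a hedged sketch of where the metadata comes from in the AVFoundation world: the captured sample buffer’s attachments (stillImageOutput and connection are assumed to be set up elsewhere):

```objc
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
    completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        // The sample buffer's attachments are the image's metadata dictionary
        CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(
            kCFAllocatorDefault, imageSampleBuffer, kCMAttachmentMode_ShouldPropagate);
        NSMutableDictionary *metadata =
            [NSMutableDictionary dictionaryWithDictionary:(NSDictionary *)attachments];
        CFRelease(attachments);
        // ...add your own properties with the category, then save...
    }];
```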

Wrapping up

So, there you have it, now you can read and write metadata.

What’s still missing are some methods to easily extract information from this dictionary. I have already created a method to extract the CLLocation information. And since I now have a way to get and set this information, I even converted it to a @property on the category, giving our NSMutableDictionary a nice way to access the location using dot notation.

It’s very easy to add getter methods for every property but I have not done so yet. Feel free to fork my repo on GitHub and send pull requests for me to incorporate.

I also added another method to add the image’s digital zoom as the next update of Snap will have digital zoom and I’m writing this information to the pictures as well.

Oh, and have I mentioned that you should get Snap for your iPhone? Check it out. You don’t know how useful and fun your iPhone camera can be until you have Snap!

Does it have to be so hard?

Are you writing a camera or image editing app for iOS but clueless about how to add geolocation to your pictures? Baffled by the lack of information in the otherwise very thorough Xcode documentation? I feel your pain, my friend. Or actually, felt it, ’cause I’ve got your meds right here.

When developing Snap I wanted to add this feature so that it could actually replace the built-in camera app. And since the built-in camera app adds geolocation, along with a lot of other metadata to the images, Snap had to do this too.

I present to you my NSMutableDictionary category that will solve all your problems. Ok, maybe not all, but the ones related to image metadata on iOS anyway.

For those with no patience, here’s the GitHub repo: https://github.com/gpambrozio/GusUtils. The repo contains an Xcode project that compiles a nice static library for you to use in your projects. I plan on adding a lot of utility classes here, so you might want to pick and choose whatever you need instead of using the whole library.

The category is easy enough to use if you check out the code, but I’ll explain a few things about how to use it for those who have never had to deal with image metadata on iOS before.

Who is this metadata person anyway?

For those of you who have no idea what I’m talking about: image metadata is most commonly known as EXIF data, even though that’s slightly wrong, because EXIF is only one type of metadata that can be embedded in an image file. My category deals with EXIF metadata, as well as TIFF and IPTC metadata, depending on what kind of information you want to add to the image.

For example, the Original Date of an image can be embedded inside an EXIF property or inside a TIFF property. My category knows this and if you want to embed this date it will set both properties for you.

You can see all this metadata in most image viewers. On OS X, pressing cmd-I in the Preview app shows an image’s metadata.

How does it work on iOS?

iOS SDK 4.1 introduced some methods that allow an app to write metadata into an image. One example is this ALAssetsLibrary method:
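
The method in question is, I believe, this one (its signature as of the iOS 4.1 SDK):

```objc
- (void)writeImageToSavedPhotosAlbum:(CGImageRef)imageRef
                            metadata:(NSDictionary *)metadata
                     completionBlock:(ALAssetsLibraryWriteImageCompletionBlock)completionBlock;
```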

That method takes an NSDictionary as the metadata source. What the documentation doesn’t explain (or at least I could not find) is how this dictionary should be structured. I googled a lot and found some examples online that I used as a starting point for the category (sorry, I can’t remember most of them…).

It turns out this dictionary consists of a lot of other NSDictionaries, with key/values that depend on the type of metadata you’re adding. You can find all the dictionaries that go inside this dictionary (I know… even I’m getting confused with so many dictionaries…) in the CGImageProperties Reference of the documentation.

I’ll try to explain with an example. Say you want to add a “Description” property to your image. This property sits inside the TIFF dictionary, so in order to add it to your metadata dictionary you can use this code:
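
The original snippet is missing here; below is a reconstruction that matches the tiffMetadata dictionary referenced afterwards (the description string is just an example; the keys come from ImageIO’s CGImageProperties):

```objc
#import <ImageIO/ImageIO.h>

// The metadata dictionary is a dictionary of dictionaries: TIFF
// properties live in their own dictionary under the TIFF key.
NSMutableDictionary *tiffMetadata = [NSMutableDictionary dictionary];
[tiffMetadata setObject:@"My image description"
                 forKey:(NSString *)kCGImagePropertyTIFFImageDescription];
[metadata setObject:tiffMetadata
             forKey:(NSString *)kCGImagePropertyTIFFDictionary];
```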

Why am I using NSMutableDictionary? Well, in this example you really don’t have to, but say you want to add another TIFF property to your metadata: with NSMutableDictionary you can just add another key/value to the tiffMetadata dictionary. With NSDictionary you’d have to create a new NSDictionary with the old key/values plus the new one. Not cool…

Adding geolocation is even harder. Geolocation has its own dictionary with a lot of possible values that are NOT explained in the documentation. The best information I found about this was in a StackOverflow question that I used as the basis for my implementation.

Please, help, I don’t want to do this…

The NSMutableDictionary+ImageMetadata category takes all this complexity away from your code. To add geolocation to your metadata dictionary, all you have to do is this:
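
The call itself is missing from this version of the post; judging by the location @property mentioned later, it is a one-liner along these lines (setter name assumed):

```objc
// One call; the category builds the nested GPS dictionary for you
[metadata setLocation:location];
```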

Where location is a CLLocation instance. That’s it. The category creates the appropriate dictionary and adds it to your NSMutableDictionary with all the right key/values. I’ve implemented some other useful setters, and there are helper methods that make it very easy to add setters for other properties.

Getting metadata

There’s another hard-to-find issue with metadata, and that’s getting it from an image you just took using UIImagePickerController or an AVCaptureStillImageOutput. I’ll deal with this problem in another post, but rest assured that our friendly category will help you a lot there too. (UPDATE: The reading part is in this blog post.)

Can I use this?

Yes, use it, fork it, spread the word. And if you make any improvements to your fork, or if you found a bug or a better way to do things, please send me a pull request so that I can incorporate your improvements into the main branch.

And if you really want to help me out and get a nice app at the same time, get Snap for your iPhone. Best 2 bucks you’ll spend today!