iOS – Marc Palmer
https://marcpalmer.net
iOS Developer & Product Designer

My interview on iDeveloper podcast, and the new Flint site
https://marcpalmer.net/my-interview-on-ideveloper-podcast-and-the-new-flint-site/
Fri, 29 Jun 2018

This week I was lucky enough to be interviewed by Scotty on his and John’s excellent iDeveloper podcast, which also meant making the flint.tools website live!

In the podcast we talked about the origins of the ideas behind Flint and the basic shift in thinking it leverages to do all the cool stuff for you automatically. I really enjoyed it and I think it’s a great intro for anybody who doesn’t quite understand Flint yet. If you’re not already subscribed to the iDeveloper podcast, you can hear the episode online here.

I have more plans for layout and content improvements over time, as well as some how-to screencasts, but… all in good time!

Cool new things the WWDC 2018 Keynote could bring to the Flint framework
https://marcpalmer.net/cool-new-things-the-wwdc-2018-keynote-could-bring-to-the-flint-framework/
Tue, 05 Jun 2018

I’ll be honest: at first I felt there were not many exciting things at this WWDC. Personally I am very happy to see the new notifications improvements and Siri Shortcuts, and I can verify, after running the iOS 12 beta on an iPhone 6, that the performance improvements are very real.

However, on deeper investigation from the Flint framework’s perspective, there were some definite points of interest that affect us, and some new things we can do when the final OS releases come out, so that we get even more out of our code. If you don’t know what it is, Flint is a small Swift framework that helps you modularise your code around Features and Actions, removing huge amounts of boilerplate and complexity when dealing with many common tasks on Apple platforms, such as permission checking, publishing NSUserActivity instances, URL mapping, feature flagging and in-app purchases.

I love making things easier for developers and I love apps with deeper integrations into the operating system, and it turns out this is actually a pretty huge WWDC for that.

Flint puts developers in a unique position because your code is already factored out into actions and features, so when new technologies come along based around user activity, we can typically integrate them with few, if any, changes to your code. Flint knows whenever high-level actions are performed, giving it a unique vantage point in your app.

Siri Shortcuts

Siri Shortcuts appears to be an umbrella term for a bunch of related but different APIs and capabilities. I’ll tease these out as far as I can at this time. Spoiler: this is fantastic stuff!

Apps can now add custom actions that Siri can trigger, and we can integrate actions into user defined workflows, search, the Apple Watch Siri watch face and HomePod.

NSUserActivity eligibility for predictions

There is a new NSUserActivity.isEligibleForPrediction property coming in iOS 12 (see it mentioned here and also in the Platforms State of the Union session). It is not yet clear exactly what it means, as there is no documentation yet.

We’ve actually already added support for this to Flint’s automatic Activities handling on the wwdc2018 branch, but we need to do testing before we can merge it. It should then just be a case of setting static let activityTypes = [.prediction] to enable this for any actions you wish.
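
In an action declaration, that might look something like this; a minimal sketch assuming Flint’s Action conventions, where everything except the activityTypes line is illustrative:

```swift
import FlintCore

// Hypothetical action opting in to prediction eligibility. Only the
// `activityTypes` line is from the post; the surrounding shape is assumed.
final class OpenNoteAction: Action {
    // ... the usual input/presenter typealiases and perform(...) ...

    // Mark activities published for this action as eligible for
    // Siri predictions.
    static let activityTypes = [.prediction]
}
```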

This will permit Siri to use the activity in suggestions it provides in “Siri Suggestions” as well as other places in the system. The mystery here is that I thought that was what Siri Pro-active was doing for the last 2-3 years with normal NSUserActivity. I believe that the issue with plain NSUserActivity in this regard is that there is just too much noise from all the activities without any real discernible utility to them. By being more explicit about our intentions, it seems this will help Siri learn more effectively.

Shortcut templates for workflows and related suggestions

There’s a bit to unpack here, with more details forthcoming in the sessions, but it seems that your apps can give Siri information about custom intents (actions) that can be performed in your app, for use in workflows and voice activation. This sounds like a perfect fit for Flint!

In essence the API allows you to describe actions and the parameterised portions of them, so that these can later be used by the user in the new Shortcuts app (formerly known as Workflow!) and with Siri to save predefined workflows that you can trigger through Siri suggestions on screen or by voice. In the latter case you let the user record their own trigger phrase for the shortcut.

From Flint’s perspective we’ll have to see what we can do here. Hopefully we can add new conventions to the Action protocol much like the custom NSUserActivity handling overrides so that you can describe the Siri Shortcuts-facing parameters of your action and how to marshal to and from that when the user triggers your shortcut or embeds it in a workflow. I’ll have to see the sessions on this during the week to get a clearer idea.

One thing is for sure: the shortcuts are defined using a configuration file rather than at runtime, so we won’t be able to automate that part.

We may however be able to add something like donateToSiri(…) on actions, so you can do things like this:
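
A sketch of how that call site might look; the donateToSiri convention and the input type here are purely hypothetical:

```swift
// Hypothetical future Flint convention: donate a performed action to
// Siri so it can be suggested as a shortcut later. Not a shipped API.
DocumentFeature.openDocument.donateToSiri(input: DocumentRef(name: "Ideas.md"))
```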

Declaring “Relevant shortcuts”

A new feature, or perhaps an enhancement of what came before: you can now register shortcuts and give the system clues about their relevance. My understanding was that Siri Pro-active would do this automatically in the past — sense what you do at different times and places and suggest them automatically. It seems that the Siri watch face on Apple Watch will use this new, more explicit information to surface relevant things to you.

The new APIs around INRelevantShortcut allow you to explicitly say “Here’s a shortcut to do X and it makes sense in the evenings at home” or “This shortcut makes sense when you are near this map region”.

Once we understand more we can be more specific but it seems like it will be possible to add something to Flint along the lines of:
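
For illustration, here’s the kind of underlying INRelevantShortcut registration such a Flint convention would presumably wrap; a sketch against the iOS 12 beta API, with illustrative activity and region details:

```swift
import Intents
import CoreLocation

// Register an activity-backed shortcut as relevant in the evening and
// near a given location. Activity type, title and region are examples.
let activity = NSUserActivity(activityType: "tools.flint.demo.wind-down")
activity.title = "Start wind-down playlist"
activity.isEligibleForPrediction = true

let relevantShortcut = INRelevantShortcut(shortcut: INShortcut(userActivity: activity))
let home = CLCircularRegion(center: CLLocationCoordinate2D(latitude: 51.72, longitude: -2.15),
                            radius: 100, identifier: "home")
relevantShortcut.relevanceProviders = [
    INDailyRoutineRelevanceProvider(situation: .evening),
    INLocationRelevanceProvider(region: home)
]

INRelevantShortcutStore.default.setRelevantShortcuts([relevantShortcut]) { error in
    if let error = error {
        print("Failed to set relevant shortcuts: \(error)")
    }
}
```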

Registering shortcut intents

There’s another way to register shortcuts for Siri to use, and that is to register custom intents. It’s not clear what the advantages are here, but it appears that if you use this technique and provide a “Custom Intent” extension, your app’s shortcuts will be available to Siri from other devices like Apple Watch and HomePod.

For example, with this approach you could say “When is my food getting here?” to your HomePod, and it would use the extension on your phone to return information about the delivery ETA.

We haven’t explored Siri Intents in Flint at all to date because the Intent domains were so limited, but it is an area we need to look at soon to see if we can use the Action paradigms there. We’ll do this before the final 1.0 release. We already have Flint.continueActivity for URL-mapped Actions, so if you declare intents using these, and have extensions that can perform the actions, you should be good to go already. YMMV right now though!
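
For reference, hooking the existing activity handling into your app delegate looks roughly like this (the exact Flint.continueActivity signature and return handling, and the presentation router type, are assumptions here):

```swift
import UIKit
import FlintCore

// Forward activity continuation to Flint so that URL-mapped actions
// are performed automatically. Details here are assumed, not verified.
func application(_ application: UIApplication,
                 continue userActivity: NSUserActivity,
                 restorationHandler: @escaping ([Any]?) -> Void) -> Bool {
    return Flint.continueActivity(activity: userActivity,
                                  with: AppPresentationRouter()) == .success
}
```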

It could be that we can add a registerIntent function to actions in future, to support this mechanism explicitly while reusing your existing actions.

macOS running apps built from iOS-compatible UIKit code in future

This announcement is interesting because it supports one of the underlying themes of Flint: to make building cross platform code easy.

The approach appears to be based around easier building from common source. For Flint apps, Feature and Action declarations should ideally work on all the platforms you support. This is why Flint feature constraints don’t use SDK-specific types; you’ll always be able to compile all your feature definitions on all platforms, even if some permissions are not applicable to a platform version (and hence the feature isn’t available there).

This means your logic code doesn’t need to worry about feature definitions being missing on some platforms. They’re there but simply disabled because their platform constraints are not met.

New permission authorisations on macOS

One other significant change is that camera and microphone access authorisation is coming to macOS Mojave (10.14), much like on iOS. Flint will have this covered very soon, using the same .camera and .microphone permissions you use on other platforms.
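
Declaring these in a feature’s constraints should then look the same on every platform; a sketch based on Flint’s constraints builder, with the exact calls assumed:

```swift
import FlintCore

// Hypothetical conditional feature gated on camera and microphone
// permissions. The constraints-builder calls are assumed, not verified.
final class VideoMessagingFeature: ConditionalFeature {
    static var description = "Record and send video messages"

    static func constraints(requirements: FeatureConstraintsBuilder) {
        requirements.permission(.camera)
        requirements.permission(.microphone)
    }

    static func prepare(actions: FeatureActionsBuilder) {
        // Declare this feature's actions here.
    }
}
```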

It seems likely Apple will continue adding these permissions as necessary as they unify the APIs that are common to macOS and iOS in the coming iOS-app-hosting-on-macOS transition.

New macOS Quick Actions in Finder

The new side bar view in Finder that shows Quick Actions relevant to the files you have selected may be something that Flint can integrate with.

More details are needed, but it may be that we can “export” your actions, as we do with NSUserActivity, so that you can simply declare an Action as a system Quick Action. We’ll see!

os_signpost logging and Instruments

There is a new API for helping to debug performance issues in Instruments, related to the OSLog APIs that already exist. It ties together logging and markers of where your key processing begins and ends, with visibility of these markers in your Instruments timeline relative to other data you are capturing.
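
The raw API looks like this; a minimal sketch, where the subsystem and interval names are illustrative:

```swift
import os.signpost

// Mark the begin and end of a unit of work so Instruments can display
// the interval on its timeline.
let log = OSLog(subsystem: "tools.flint.demo", category: "Actions")
let signpostID = OSSignpostID(log: log)

os_signpost(.begin, log: log, name: "PerformAction", signpostID: signpostID)
// ... the time-consuming work being measured ...
os_signpost(.end, log: log, name: "PerformAction", signpostID: signpostID)
```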

As of last week, Flint already supports os_log as a logging target out of the box for all your contextual logging, giving you rich, high-performance logging in Console.app. In the light of this new API we may want to add a new ActionDispatchObserver and a convention so that your actions can easily indicate whether they should take part in signposting, so that you can debug their throughput with Instruments.

It would be relatively easy to add a new observer for this, but we’ll need to see what this is like in action to work out what is truly useful. We could of course add it automatically for all actions, but most actions will not do anything too time-consuming themselves, though they may spawn work that is… so perhaps we can do something there automatically.

App Store review crackdown on Usage Descriptions

Be warned! One of the first things mentioned in the Platforms State of the Union talk is a change to App Store review policy: they are now planning to crack down on vague or unhelpful privacy usage description keys in your Info.plist.

While Flint cannot write these for you, we do have a mechanism in place to verify that you have specified the required keys for all the permissions you use in your app. In development you’ll see warnings at startup when Flint sees you have declared features that require permissions for which you have no usage description yet.

This saves you from surprising assertions during QA testing, where somebody hits a feature code path that you didn’t test fully, one that requires a permission whose usage description is missing from your Info.plist.

That’s all for now

I’ve got a lot of WWDC sessions to watch! Exciting times.

Hacking my shell prompt so I make fewer mistakes working with Xcode projects
https://marcpalmer.net/hacking-my-shell-prompt-so-i-make-less-mistakes-working-with-xcode-projects/
Wed, 30 May 2018

Making mistakes is how we learn. But not all mistakes are equal, and making the same one over and over is not learning.

Often I run two or more different Xcode builds on the same machine, either because a client project can’t yet build on the latest Xcode release, or because we’re in a new Xcode beta period. Of course I forget which one I am running, especially if switching between projects multiple times in the same day. Xcode 9.4 is here today, and Xcode 10 beta is around the corner at next week’s WWDC 2018! It’s arguably the worst time of year for this problem.

Typically I end up running carthage update and get a compiler error, or a fastlane build command that then builds with an older Xcode version. The worst part is how much time this can waste before you realise what is going on. What’s particularly pernicious about this is that if you have multiple shell sessions open at once, it is easy to forget that changes to the Xcode toolchain affect other previously open sessions.

Yesterday I decided to do something about it. I’ve never customised my shell prompt before, but I knew it was possible, and I suspected it would be simple to show the current version of the Xcode toolchain that the shell is using (which is controlled by xcode-select). The intention is that this will make me continuously aware of the Xcode version the shell is using when I run commands, and at worst, if things go wrong, I can immediately see why. Note that “Hey, it looks like you built this with the wrong version of Xcode” is never the error you see when these problems occur.

What does it look like?

By default it is set up to show your host and working dir, the Xcode version — yep, that’s the bit with the hammer — and, following that, your git branch and status, if any. I don’t know about you, but I often make mistakes where I am on the wrong branch too, so “sending two fascists to one prison”¹ seems like a good plan.

The git parts came from a handy site called http://ezprompt.net for which I take no credit. The Xcode version parsing is done in a cool way thanks to the ever-helpful @danielpunkass who spent the time to speed up my original solution of running xcodebuild -version which took ~100ms every time on my Late 2016 MBP. He replaced it with a call to xcode-select -p and plutil to extract the bundle version from the Info.plist. This gets it down to under 40ms each time on my machine, which is great and does not affect shell usage at all… that’s ~2.5 screen updates at 60FPS so good luck typing faster than that.

To do this yourself all you need to do is add the following code to your ~/.bash_profile file, assuming you use bash. Other shells, YMMV. Let me know @marcpalmerdev if you have instructions for doing the same with other shells and I’ll update this post.
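
Something like the following; this is a reconstruction of the approach described above, so the details may differ from the original snippet:

```bash
# Show the selected Xcode version in the prompt, plus the git branch.
# A reconstruction of the approach described above; details may differ.

# Fast Xcode version lookup: resolve the selected developer dir with
# xcode-select, then read the app version from its Info.plist via plutil.
xcode_version() {
  local plist="$(xcode-select -p)/../Info.plist"
  plutil -p "$plist" | awk -F '"' '/CFBundleShortVersionString/ { print $4 }'
}

# Current git branch, if any (the git parts come from ezprompt.net).
parse_git_branch() {
  git branch 2>/dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
}

export PS1='\h:\W 🔨 $(xcode_version)$(parse_git_branch) \$ '
```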

It seems like a small thing but we are all human. “Developer ergonomics” really matter to me. When we cut ourselves on the sharp edges of the tools we use, we should find ways to save ourselves the pain in the future. Many small changes add up to a much better experience.

¹ Me trying to avoid the unfortunate un-vegetarian colloquialism “Killing two birds with one stone”. It is sad how many of these old phrases come from brutality to animals or people, but they were different times. See Animal friendly alternatives for common phrases. ↩

Using conditional conformances to improve API ergonomics in Flint
https://marcpalmer.net/using-conditional-conformances-to-improve-api-ergonomics-in-flint/
Fri, 27 Apr 2018

My new open source framework Flint is pure Swift and involves splitting your code into small actions that, when performed, are passed a single input and a presenter object.

What is cool about this is that we use Swift’s associated types to allow action implementations to specify the type of the input and presenter. Here’s an example:
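
A sketch of such a declaration, based on Flint’s documented conventions; the exact protocol requirements and outcome type are assumed:

```swift
import FlintCore

// An action declares its own input and presenter types via associated
// types, so perform(...) is fully type-safe. Details here are assumed.
final class ShowProfileAction: Action {
    typealias InputType = ProfileID
    typealias PresenterType = ProfilePresenter

    static func perform(with context: ActionContext<ProfileID>,
                        using presenter: ProfilePresenter,
                        completion: @escaping (ActionPerformOutcome) -> Void) {
        presenter.showProfile(context.input)
        completion(.success)
    }
}
```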

This all works great. Your binding has a perform(using:with:completion:) function that only supports the correct types the Action expects. However if you have an action that does not need an input or presenter, you have to use special types Flint defines called NoInput and NoPresenter to satisfy the associated types:
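
So a call site for an action that needs neither ends up looking something like this (the feature, action and placeholder spellings are illustrative):

```swift
// Having to pass the placeholder types explicitly at the call site.
DataFeature.refresh.perform(using: NoPresenter(), with: NoInput()) { outcome in
    // handle the outcome
}
```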

It’s pretty ugly. Thanks to a nudge from @hishnash I looked into how we could eliminate the arguments that are not needed in those cases. I originally tried overloading the perform function with generic functions constrained on InputType and PresenterType but… you can’t do that.

It turns out that since Swift 4 the addition of conditional conformances can help us. We can overload the perform function and constrain each version to specific types for InputType and PresenterType:
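
The shape of the fix is roughly this: constrained extensions providing simpler overloads (the binding type and names are assumed):

```swift
// When the action needs no input, offer perform(using:completion:).
extension ActionBinding where ActionType.InputType == NoInput {
    func perform(using presenter: ActionType.PresenterType,
                 completion: @escaping (ActionPerformOutcome) -> Void) {
        perform(using: presenter, with: NoInput(), completion: completion)
    }
}

// When it needs neither input nor presenter, offer perform(completion:).
extension ActionBinding where ActionType.InputType == NoInput,
                              ActionType.PresenterType == NoPresenter {
    func perform(completion: @escaping (ActionPerformOutcome) -> Void) {
        perform(using: NoPresenter(), with: NoInput(), completion: completion)
    }
}
```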

That is pretty great. While generics and associated types can be a real pain to use, it’s important to celebrate these great little wins where something quite advanced is possible in a type-safe way with very little effort.

You’ll never believe what held up Soundproof development for months
https://marcpalmer.net/youll-never-believe-what-held-up-soundproof-development-for-months/
Sun, 01 Oct 2017

Back in the Spring of 2015 I started doing some work for a new release of Soundproof, my iOS app for music practice. We’d just been through launch in Autumn 2014, having gone through a rapid migration to Xcode 6 and iOS 8. The plan was to add a few little feature enhancements and push out a release.

That Xcode 6 migration had been rather painful. We used CocoaPods for our dependencies, and at that point there were problems building for Xcode 6 and its new dynamic frameworks, bitcode, and the iPhone 6 and 6 Plus with their new screen sizes, plus asset and layout challenges. For long periods it had been difficult to build the code, so later in the Summer of 2015 I decided to rework all our dependencies to work with Carthage, including a few little open source frameworks we pulled in.

Throw in some experimentation with Fastlane and Xcode Bots, and you have a recipe for months of delays while the code was out in the weeds.

By this time I had taken contract work at Upthere, Inc. because, and this will be no surprise to indies, it is very hard to make good money from iOS apps. Nevertheless I tinkered on Soundproof every now and again to try to keep it moving forward.

However I hit a rather painful roadblock. I would run the app in Xcode and it would start… but the debugger would disconnect from the app. I had no idea why this was. It meant that I could not debug any of my code. I couldn’t finish the features I was working on. I couldn’t even get print statements or log output. OK in hindsight I could have set up a persistent file logger but… seriously, finding a fix for the debugger problem couldn’t take that long could it?

Time passed. I investigated Xcode issues. I looked at logs. I asked people for help. Caught between work and the utterly frustrating situation I stopped bothering and got on with the day job.

I tried again in January 2016, and asked the fine people in #code on the Core Intuition Slack. I made no progress and dropped it again. There were less horrific things to deal with after all.

Which brings us to the last few days, when I decided to try again so I could ship Soundproof feature releases of the cool stuff that had been sitting nearly finished for two years. Also, I didn’t want to be defeated by this stupid problem.

The good news is that I have today solved the mystery, with the help of the fine folks on Core Intuition Slack again. Here’s a little walk through of how we got there. Three Xcode releases and three iOS releases after the original problem surfaced.

It is worth noting that in all this time, Soundproof as released to the App Store in 2014 still runs well on iOS through to 11 and on the new devices launched since. That is by design, because we stringently avoid applying weird hacks, workarounds and bending the UI frameworks to our will. We build to last.

That is, except for a couple of things which, as luck would have it, came back to bite us. (Literally only us, as this is just a development problem.)

We need to root cause this mofo

So Xcode would run the app, start attaching the debugger, and then as the app completed startup — before it executed any code in main(…) — the debugger would disconnect.

The logs from Xcode “Devices” or Console.app for the Simulator weren’t much help. They are incredibly noisy and have a lot of confusing internal terminology about creating “assertions” (not like assert() you would use in code).

On a real device, you’d get logging like this around the point the debugserver process (as I now know it to be called) was seemingly crashing:

Note the part Formulating report for corpse[387] debugserver. It does seem like debugserver is crashing. If I ran it on the Simulator, sometimes I would get a crash in debugserver on the Mac, which would give an unsymbolicated crash trace:

The observant hacker-guru types among you may notice there is a clue in there that I did not know to pick up on until resolving the problem. Anyway, the folks on Core Intuition Slack were great and the recommendation was to strip out code from the app until debugserver stopped crashing.

This classic programming trick is actually a really difficult task in a non-trivial app like Soundproof, especially as the problem occurs before main() is called, so if anything it is related to statically initialised values.

I decided to go the other way: I created a new Xcode Project for a single-view app and added all the Carthage dependencies the main app uses. I thought I should first find out if it is my app code or something I’m pulling in as a dependency.

As luck would have it, the test project crashed debugserver. This was great news. Removing one dependency at a time and running the build after each removal (slightly more fun than it sounds, as there were interdependencies, so I had to start with leaf nodes first), I quickly came to the dependency to blame: “GBDeviceInfo”, an open source framework for accessing data about the device the app is running on.

This was fantastic, if slightly depressing, news; I just had to find out what there was in this framework that was crashing the debugserver process.

The sequence of unfortunate events…

You might wonder why I needed to check for the characteristics of the current iOS device the app was running on. This is generally a bad idea, and I was never happy with it. However, in the iOS 8 era when this code was written, things were a bit rough to say the least.

We’d just had the whole iOS 7/8 blurring trend land on us, but without mature visual effects view support, so we had some manual code to perform a static blur on a large background. This couldn’t run on some of the devices, like the iPhone 4, 4S and 5, that were still very popular at the time. So I added some code to test the device type, to see if we should just apply some alpha instead of a blur in those cases.

In addition to this, there was a fun bug in the middle of the iOS 8 point release cycle, circa iOS 8.0.2. We use AVSpeechSynthesizer to speak the name of the next track for the user, and someone at Apple managed to change the meaning of playback rate 1.0 for the utterances. This resulted in speech that was way too fast, but only on devices running certain point releases of iOS 8. So again we had used the device info dependency as a quick and easy way to test for iOS 8 and apply a different playback rate.

These challenges with iOS 8 meant we needed a quick and easy way to tell which device and iOS release we’re running on, and we pulled in GBDeviceInfo, a handy little framework for this.

We shipped with this and all was fine. Debugging was fine. However due to the CocoaPods debacles with Xcode 6, I had to fork GBDeviceInfo to experimentally hack in my own Carthage support — essentially just adding a new shared Xcode Scheme.

I did this fork in mid-2015 from a newer version of the library than the released App Store version, and it all seemed OK enough. Although that was when I started noticing something funny with the debugging not being reliable in Xcode.

Today, trawling through the commit history of GBDeviceInfo to find something that might cause the problem I was having with debugserver, I came across a commit from October 2015 entitled:

Yep. All the while I suspected some obscure debug symbol problem was causing the crash, but it turns out that the library was working as intended, and had previously contained code to prevent debuggers attaching:
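
That code is the classic anti-debugging trick; the library’s exact source may have differed, but the mechanism is this:

```c
#include <sys/types.h>
#include <sys/ptrace.h>

/* PT_DENY_ATTACH tells the kernel to refuse (and kill) any debugger
   that tries to attach to this process, which is exactly why
   debugserver was dying at launch. */
static void deny_debugger_attach(void) {
    ptrace(PT_DENY_ATTACH, 0, NULL, 0);
}
```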

(Here’s where the observant among you will see the clue: this uses ptrace to deny attachment, and ptrace was in the stack trace from the Simulator debugserver crashes.)

As I had taken my fork of the library before the code removing the debug prevention landed, did not suspect I needed anything new from the upstream repository, and had “in progress” Carthage changes in place… I had not pulled those newer changes.

I am not going to be hard on myself about this though. Who on earth would have thought someone would put this code into an open source library?

Ultimately, I should not have incorporated a dependency for which I did not review all the code. I did review it originally, when deciding which library to use. However, the change came in after we originally released, so I would also have had to review all the commits in between. I suppose this is the fallacy of using external dependencies: if you factor in the time to review all the code and all the updates, the gains are not nearly so great. A lot of the benefit comes from trust or, frankly, wishful thinking.

I have definitely learned a painful lesson. Having GBDeviceInfo available as open source got the app released quicker than if I had to write that stuff myself, and while it is unfortunate for me that the author added this pretty bonkers bit of code for a short period of time, it was more an unfortunate sequence of events than anything else.

I am pleased however that I can release a new Soundproof build soon… ish.

AirPods are great but… could be smarter
https://marcpalmer.net/airpods-are-great-but-could-be-smarter/
Fri, 30 Jun 2017

If you haven’t used AirPods yet, they are very nice compared to previous Bluetooth headphones.

You can pop them in your ear and you hear the “duh dunk” sound as they automatically connect to your phone and start playing whatever you had playing.

However, if you then turn to your iPad and press Play on some music or video, your iPad blasts audio into the office while you continue to hear music from your phone.

This is clearly wrong behaviour. The times when you would want to listen to audio on headphones from one device while letting another blast audio out of speakers in the room are basically zero.

When you play audio on another device of yours, the AirPods should pause the audio on the previous device and switch to the new one. The same goes if you turn on the mic: start a video call on your MacBook and it should automatically connect to the AirPods.

Apple have done a great job with the W1 chip so far, and maybe this is actually a hardware limitation and not just a software problem. I hope they improve this soon, as I can’t be the only person to have thought it.

One possible future for Apple Watch
https://marcpalmer.net/one-possible-future-for-apple-watch/
Tue, 10 May 2016

Everybody’s talking about the Apple Watch around the anniversary of its first release. Most of the chatter is about how disappointing and how slow it is, with some countering with how useful it is despite this.

I am in agreement with most of these viewpoints, but people who don’t think this is going to change radically in the next year or two are crazy. You only have to look at the utility gulf between a first-generation iPhone and the iPhone 4 to see this.

In the latest episode of the always excellent Upgrade podcast (#88), Myke and Jason mention rumours that mobile data is coming to the next-gen Apple Watch, and seem skeptical. It may be too soon for the 2nd generation, but to me it is a no-brainer that this is going to happen as soon as is practically possible. Often when it seems something like this is imminent from Apple, it usually happens a year or more later than you’d hope.

Pretty much since the Watch came out, I’ve been in the minority who think the Watch will “replace” the iPhone. This does not, however, mean that you won’t have an iPhone-like device any more and will do everything on the Watch. That is nonsense.

Let me describe to you a possible future I see for the Apple Watch:

You have a 2nd or 3rd gen Apple Watch. It has 4G or equivalent mobile connectivity and direct wi-fi (which it already has in some respects). In your pocket is a familiar 4”, 4.7” or 5.5” iPhone-like device. Maybe it is called an iPad nano, maybe it is still called iPhone. This doesn’t matter.

What matters is, your watch has high speed data connectivity everywhere. This means it does not need to be near your phone to do all the cool things a Watch can possibly do in the future. Display size will always be a constraint and that is why you will often still have another device with you, but this communications ability pushes the functionality to the upper limits in terms of data.

The device is fully functional in its own right. Your email, calendar updates, text messages (green and blue), push notifications all work wherever you are. This changes everything about the Watch. The Watch is “you” in terms of the mobile network.

At this point people will say:

I don’t want a data plan for my watch!
Battery will drain so fast it will be useless
I don’t want to have multiple numbers

All of this is wrong. Consider that Apple have already implemented Continuity, which makes all our devices ring when we get a phone call or message, and lets us accept or originate a message or call from any of our devices. The effort to achieve this is huge, and they have continued to refine it.

The SIM currently lives in the iPhone. Apple has already done most of the work here to allow the SIM to live in any of the devices. They have also been working for years to move to a software SIM with resistance to date from the carriers. Eliminating the SIM card and slot would make it easier to integrate this into a Watch, but may not be a prerequisite.

It is not that hard to imagine your Watch becoming the “SIM holder” whether it is a physical SIM or not. Thanks to the already proven Continuity features, everything works as it does now. You can make and receive calls and texts on your iPhone that doesn’t have a SIM in it because you are wearing your Watch which does.

“…but the Watch battery will run out really quickly if all my data and voice traffic from my iPhone is actually going through the Watch”

This is the smart part. If you have an iPhone (or 3G/4G-compatible iPad), you already have the radios in those devices with big fat batteries. So they can actually make the mobile network connection themselves, but using the “SIM identity” from your Watch. This is like the Watch gaining superpowers when paired up with another device that has mobile radios. By wearing the Watch you are bringing your identity to the larger mobile device, instead of the other way around.

This means you don’t need an iPhone at all, and you certainly don’t need an iPhone for your Watch to do everything useful with data and calls. The Watch becomes untethered, and as per the Apple promise of integration across products, things only get better when you also have an iPhone or iPad. Having both a Watch and an iPad could end up costing about the same as buying just an iPhone, but with much more flexibility.

This takes away the emphasis on the iPhone as the “must have” product in the Apple line. You can choose whichever product suits your lifestyle best, be it iPhone, iPad or MacBook, but you can have phone functionality with any and all of them¹. It is also worth noting that classic phone functionality is not nearly as valuable to many people these days, especially younger people.

This seems to be a no brainer to me. Most of the technical challenges seem to be solved already, and the whereabouts of the SIM and proxying the identity to other devices seems to be the only sticking point. I bet Apple has already had this working for a few years in the labs, in the guise of the “Software SIM”.

So there it is. I’m not placing any bets on this, but it makes perfect sense to me.

¹ Caveat: the MacBook is not likely to see mobile data antennae any time soon, but maybe with a SIM in the Watch this would come too, as it eliminates the need for another SIM in the laptop, a long-running concern. ↩

Our WWDC keynote party (in ENGLAND)
https://marcpalmer.net/our-wwdc-keynote-party-in-england/
Thu, 05 May 2016
At the beautiful rural co-working space that I run, my iOS app development company will be hosting a free WWDC keynote party here in Chalford, near Stroud, in England (I have to keep saying this because we keep getting people applying from America).

This is a repeat of events we’ve held for major Apple announcements for the last two years, but bigger and better thanks to the new co-working studio we have.

If you are in the vicinity and want to hang out with some nice friendly iOS and Mac devs, as well as people generally interested in the Apple platforms please grab a ticket.

Suggested iPhone Lock Screen improvement
https://marcpalmer.net/suggested-iphone-lock-screen-improvement/
Thu, 05 May 2016

Since the improvements to Touch ID, you see many people complaining that they press the Home button to see the lock screen, to catch some missed push notification or check the time, and it instead unlocks the phone. The new Apple TV remote has a nice feature where the Apple TV itself dims the display when inactive, and “wakes up” brighter when you lift the remote. Obviously this is using the accelerometers in the remote. Wouldn’t it be great if the iPhone did this to reveal the lock screen?

If Apple made this relatively simple change to iOS, the iPhone could wake up the display when you lift it. To avoid wasting battery by lighting up in your pocket all the time, it should only do this when the phone is facing up and the movement is in the upward direction, or when the phone is perpendicular to the ground (in a pocket) and moves to be upward facing.
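
In Core Motion terms the heuristic might look something like this; a sketch with guessed thresholds, and of course only the system itself could act on it:

```swift
import CoreMotion

// Detect "lifted while ending face-up": gravity gives orientation,
// user acceleration gives the lift. Thresholds here are guesses.
let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 0.1

motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let motion = motion else { return }
    // Face-up: gravity points out of the back of the device (negative z).
    let isFaceUp = motion.gravity.z < -0.7
    // Moving upward: acceleration along the screen's outward z-axis.
    let isMovingUp = motion.userAcceleration.z > 0.3
    if isFaceUp && isMovingUp {
        // This is where the system would light up the lock screen.
    }
}
```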

iOS development is exciting
https://marcpalmer.net/ios-development-is-exciting/
Mon, 17 Aug 2015

The pace of change and the reliable yearly cycles are hard to keep up with. However, I just can’t deny that iOS dev is also consistently exciting. You know there’s cool new stuff coming all the time, and a world of interesting possibilities that can affect a huge number of people in the world.
That’s the primary reason I left web app development a couple of years back. Long before that, I had made Java mobile phone games, and that was exciting for a short while. You had to do crazy things to fit an entire game into a 64KB zip file… but the novelty quickly wore off as the cross-device porting pain and the extremely limited platform hit you.

I’ve been coding iOS since the first SDK was released and I am enthused by it now more than ever, which is pretty amazing. I’m motivated by making things people hold in their hands and use to make their lives better, and as developers we have so much power available to us.