Another year, another excellent iteration of 360|iDev in the books! Once again, I thoroughly enjoyed the conference, was honored to speak there, and had a great time visiting Denver. And once again, it provided a great opportunity to take the pulse of the iOS developer community.

Two years ago, after attending my first 360|iDev, I wrote about the iOS developer community wrestling with the adoption of Swift. Last year, I was struck by the way Swift had opened up so many new avenues for iOS developers to pursue. This year, what was most notable was how un-notable Swift has become.

Swift was ubiquitous and uncontroversial. I didn’t see a single line of Objective-C in the talks I attended. At one point, I began asking other attendees if they’d seen any Objective-C. Most said no. One told me, "Yes, there was one slide. The speaker warned us ahead of time."

For the overwhelming majority of the community, the only reason to write Objective-C in 2017 is legacy code. The iOS developer community has become Swifty by default.

This is an impressive turn of events. A programming language that was released barely three years ago is now the language of choice on the most valuable computing platform in existence. There have been some bumps in the road (see the Swift 2 to 3 transition), but for the most part, Apple and the Swift team deserve a lot of credit for pulling this off.

If you’re one of the few holdouts yet to dive into Swift, well, I think it’s time! While the extent of Swift’s success beyond macOS and iOS is yet to be decided, this much now seems certain: Swift will be the language of choice on Apple’s platforms for many years to come.

Widely used techniques for hiring, motivating, and measuring developers aren’t just ineffective; they’re very likely counterproductive to building a healthy team. To hire better developers, we need to stop fixating on technical skills in isolation from other factors. I’ve come to this conclusion over the course of my career, but if you’ll bear with me, I have a particular anecdote from my first job out of school that helps explain why.

A Million Dollar Meeting

My group at Boeing maintained product data management software: imagine it as GitHub for collaborating on 3D aircraft designs. It was critical to the workflows of hundreds of engineers (the kind who work with atoms, not bits). I worked on a small subteam that focused on data visualization. Our software extracted heavyweight design data and rendered it in lightweight formats. You could view an entire aircraft, twirl it around in 3D space, then zoom in on a single bolt, all with reasonable performance. Engineers used these renderings for frequent “design review” meetings, which were something like a daily standup.

All of this was in place when I started there, but there was a problem: renderings were often wrong. Engineers reported parts were out of place, overlapping, or floating off in space. This wasn’t because the people I worked with weren’t capable programmers. On the contrary, they were very smart and far better programmers than I was as a fresh college grad. This was just a really challenging technical problem. Each aircraft was organized in a huge tree structure, with millions of nodes and thousands of levels, and was being updated throughout the day. If we rendered the entire assembly from the top down, we could be sure everything was positioned correctly. But doing so took 12+ hours per aircraft.

One developer had written impressive algorithms to detect what parts needed to be re-rendered when something changed. This allowed us to keep the renderings up to date while doing the minimal processing work needed. It worked amazingly well given the complexity of the problem, but there were too many edge cases. Every week, our manager would hear from some engineering group that had been unable to complete their design review. Dozens of engineers, who had all stopped their work for the meeting, would have to go back to their desks having not completed the task. Each time, we would try to figure out where the system had fallen down, then code around that case.

I was woefully unprepared to contribute in a technical way to this issue at this point in my career. I was also new and naive enough to lack “proper” respect for the company’s hulking bureaucracy. Without asking anyone’s permission, I talked about the issue with another employee whom I’d gotten to know. He worked in the group that helped train and support the engineers on software tools. I asked some questions, and he took them to various engineering groups to do some poking around. This started a dialog that eventually resulted in a startling conclusion: the engineers didn’t need the renderings to be updated in realtime. Renders that included all changes completed through the previous day were sufficient.

We threw out a pile of technically impressive but hard-to-maintain code. It was replaced by a few nightly cron jobs running programs that were basically one-liners: `rootNode.render(path)`! The engineers were happy and my manager stopped hearing complaints. He couldn’t have cared less how technically unimpressive the solution was.

I don’t think it’s an exaggeration to say that this issue was costing Boeing seven figures a year in salary time. That’s without considering the opportunity costs or the effects of increased tension between the groups. If I’d never done another piece of work at Boeing for the nearly three years I was there, they would still have gotten a good return for my salary. Yet there is literally no existing metric for measuring programmer productivity that would have incentivized or recognized this contribution.

In the Weeds

Our industry fetishizes developers who “love digging into challenging technical problems.” I’m one of those developers. Luckily, stumbling into this and other experiences in my career has taught me an important lesson.

If we want to increase our value as software engineers to employers or clients, the single biggest lever available to us is almost never purely technical. Rather, it’s taking our technical knowledge and putting it in the context of the business problems we are being asked to solve.

This isn’t management’s responsibility. It’s ours. The business will inevitably come to us with implementation details. It's our job to elevate the discussion back to the level of actual goals. We must be comfortable conversing at that level and able to determine the real organizational needs. Then, and only then, is our task to find the simplest technical solution to meet those needs.

So what does all this have to do with hiring? Simply put, you want to find candidates who are willing and able to do this. You don’t want a software engineer who blindly does what she’s told, nor one who just dives head first into technical challenges because she loves them. You want one who distills those technical challenges into actual business tradeoffs and communicates these with empathy and patience to her colleagues who aren’t developers.

Testing for this isn’t easy, but having candidates whiteboard algorithms, for example, doesn’t even come close. In fact, these and other common practices are probably filtering out exactly the kind of engineers we want. Those left are likely: 1) too eager to jump into a complex technical solution when a simple business fix exists, or 2) so conformist that they’ll do what they’re told even if they suspect it’s not best for the organization. We think we're weeding out weak candidates. We're actually pulling out the best seedlings.

On second thought, maybe asking a candidate to whiteboard algorithms isn’t a bad idea after all. If she politely protests, and provides a thoughtful explanation as to why it’s not a useful exercise, hire her on the spot.

Important footnote: I was inspired to finally write this blog post by two events. The first was seeing Janie Clayton (aka @RedQueenCoder) tweeting and podcasting about her experience interviewing for iOS positions. I was disappointed, if unsurprised, by some of the responses she received. The second was the penultimate CocoaLove conference, which I attended this past weekend. There were several great talks about motivating and measuring teams of developers, especially those by Lydia Martin and David Starke, which sparked some great conversations with other attendees. Much of what I’ve written here crystallized during those tweets, talks, and conversations. Thanks to those folks!

What kind of version control system will eventually replace Git? When I pose this question to fellow developers, I get one of two responses. Some are shocked by the idea that anything will ever replace Git. “Version control is a solved problem and Git is the solution!” Others imagine what I call Git++, a system that is essentially the same as Git, but with some of the common problems and annoyances resolved. Neither of these is likely to be the case.

Why Git Won

To imagine what might come after Git, we have to remember what Git replaced and why. Git supplanted Subversion not by improving upon it, but by rethinking one of Subversion’s core assumptions. Subversion assumed that revision control had to be centralized. How else could it work, after all? The result of this assumption was a tool where branching was discouraged, merging was painful, and forking was unimaginable.

Git flipped this on its head. “Version control should be distributed! Anyone should be able to clone a repo, modify it locally, and propose the changes to others!” Because of this inversion, we got a tool where branching is trivial, merging is manageable, and forking is a feature. Git isn’t an easy tool to grok at first exposure, but it gained widespread adoption in spite of that, precisely because of these characteristics. Git offered benefits to solo developers, in stark contrast to SVN, but it made teams orders of magnitude more productive.

Cause and effect are hard to tease apart, but it’s no coincidence that Git’s adoption corresponded with a Cambrian explosion of mainstream open source projects.

What’s Next?

Whatever replaces Git, be it next year or next decade, will follow a similar path in doing so. It will not be a small improvement over the model Git already provides. Instead, it will succeed by rethinking one of Git’s foundational principles. In doing so, it will provide orders of magnitude greater productivity for its adopters.

Just Text?

Git assumes code, and everything else, is just diffable text. Git is really good, and really, really fast, at diffing text. This is what makes Git great, but it’s also the assumption that gives a future system an opportunity. Code is not just text. Code is highly structured text, conforming to a specific lexical grammar. Even in weakly or dynamically typed languages, there is a ton of information in that code that a computer can know statically.

What would a version control system look like that knew something about your code, beyond just textual diffing?

Git will happily check in a syntax error: what if it didn’t? A commit in Git is just a commit, regardless of whether you added a code comment, tweaked a unit test, refactored a method name, added a small bit of functionality, or completely changed the behavior of your entire program. What if these kinds of changes were represented differently?
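Pieces of this are approximated today with bolt-on commit hooks. As a rough illustration (a hypothetical hook script in Python, not anything built into Git), here's a sketch that refuses to commit source files that don't even parse:

```python
# Hypothetical pre-commit check: reject staged Python files that fail to parse.
# A future VCS might understand syntax natively; today this is bolted on via hooks.
import ast
import sys

def parses(source: str) -> bool:
    """Return True if the source is syntactically valid Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

def check_files(paths):
    """Return the subset of paths whose contents fail to parse."""
    bad = []
    for path in paths:
        with open(path) as f:
            if not parses(f.read()):
                bad.append(path)
    return bad

if __name__ == "__main__":
    # In a real hook, the paths would come from `git diff --cached --name-only`.
    failures = check_files(sys.argv[1:])
    if failures:
        print("Refusing to commit files with syntax errors:", failures)
        sys.exit(1)
```

Note that even this crude version only knows "parses or doesn't"; it has no vocabulary for *kinds* of changes, which is exactly the gap a context-aware system could fill.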

Git doesn’t know anything about your package manager or semantic versioning. Instead, you check in some kind of configuration file for your package manager of choice. That file defines dependencies and declares a version, and it’s your job to keep these things in sync and to make judgment calls about version bumps. In any given language you may have 2 or 3 or 12 package managers to choose from and support, each with its own config file, which also has to be checked in and kept in sync.

What if, instead, our version control system was also our package manager, and it understood and tracked our dependencies and their versions? What if, since it knew about the nature of the changes made to code, it automatically enforced actual distinctions between bugfix, feature, and breaking changes? Or maybe, if this is being analyzed and enforced by computers rather than humans, those distinctions become less meaningful.

Imagine a library your app depended on released a new version. In that version, they refactored a method name but didn’t make any changes to behavior. Our hypothetical next-gen system would provably know this. Is that still a breaking change? What if the system automatically refactored your code, updating to the new method name, in conjunction with the commit representing the update to the dependency? Sounds scary, but in reality, this is much safer than what we accept today. Sure, a library owner may have kept the public interface the same. “Not a breaking change!” Your code may still compile or run without a source change. “See, non-breaking!” But if they’ve completely changed the method’s behavior, you’re still screwed, and nothing but social contract enforces that they didn’t.
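To make the "provably know" claim concrete, here's a toy sketch (an invented example using Python's ast module, standing in for the hypothetical system) that detects when two versions of a function differ only by name, i.e., a pure rename:

```python
# Toy sketch: detect that a function was renamed but its structure (arguments,
# body) is otherwise unchanged. A context-aware VCS could record such a change
# as "rename", rather than seeing an opaque textual diff.
import ast

def normalized(source: str) -> str:
    """Dump a function's AST with its name scrubbed out."""
    tree = ast.parse(source)
    func = tree.body[0]          # assume the snippet is a single function
    func.name = "_"              # erase the identifier before comparing
    return ast.dump(func)

old = "def fetch_user(uid):\n    return db[uid]\n"
new = "def get_user(uid):\n    return db[uid]\n"

# Identical structure under a different name: a pure rename.
is_pure_rename = normalized(old) == normalized(new)
```

A real system would need to handle call sites, scoping, and far messier changes, but the core idea, comparing structure rather than characters, is ordinary programming, not magic.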

Stay Tuned

These ideas just scratch the surface of what a more context-aware version control system could do, but as the title suggests, I’m just spitballing here. There are lots of challenges to implementing a system like this, many with non-obvious solutions that may take years to develop. Ultimately, though, you can bet on one thing being true: something radically different, and radically better, will eventually come along to replace Git.

Last week marked my second time attending 360|iDev and my first time speaking. It was an absolute pleasure to do both, and as usual I feel like my head is still spinning from all the cool stuff I learned and awesome people I met. As such, it's worth taking a minute to reflect on the week and what I took from it.

Side Note: if you’re interested in my talk, it was about building a compiler in Swift for a simple language called Bitsy!

Last year, there was a clear theme that stuck out at the conference: that the Cocoa community was wrestling with the introduction of Swift. While the evolution and adoption of Swift continues to impact our community, this year it felt less central. In fact, as many changes as Swift has undergone, it feels like only one drop in a sea of new, shiny things contending for Cocoa developers’ attention.

Take a second and think about all the changes our community has seen since the last 360|iDev. From the introduction of a new platform (tvOS), to the rethinking of an older one...twice (watchOS). From new device capabilities (force touch) to new devices themselves (the iPad Pro). From big community milestones (CocoaPods 1.0) to big community shocks (the shuttering of Parse). It's been a busy year, and that’s before you mention the open sourcing of Swift, or the myriad of new APIs in iOS 10 or, well….you get the picture.

The sheer volume of stuff an “iOS” developer is exposed to today is staggering. This was reflected in the conference itself. There were talks about an incredible range of technologies that went well beyond making simple apps: simulations, compilers, ray tracers, neural nets, drones, and scientific computing, to name just a handful. There were “soft” talks focused on indies, sure, but also about development in big teams and for big businesses, and everything in between.

All of this can seem overwhelming and sometimes leave us nostalgic for “simpler times,” but at an event like 360|iDev, it becomes clear these are just the signs of a healthy ecosystem. I met developers working at every size company, including scrappy startups, tech behemoths, traditional Fortune 500 companies, and huge enterprise consulting firms. Of course, my fellow indies and small dev shops were also well represented.

For all the signs it’s harder than ever to make a living in the App Store, it’s certainly a good time to be making a living as an app developer, and that’s precisely because our platform has grown far beyond simple, consumer facing phone apps.

This year, my one big takeaway is that there isn’t one big takeaway, and that’s ok. It means as Cocoa developers we have a myriad of opportunities, interesting problems, and fascinating technologies to explore. This platform has opened far more doors than I would have guessed when I started tinkering with the iPhone SDK a few short years ago.

So, in the coming weeks, as we’re weeping over build errors left by the Swift 3 migrator, or scrambling to get the latest iOS 10 features into our next release, we should remember to take a breath, put things in perspective, and be grateful we have such “problems” to deal with at all!

TL;DR Implementing a programming language isn’t magic! Check out Bitsy if you want to start learning how! Also take a look at bitsy-swift, a simple Bitsy compiler built in Swift.

For the average user, smartphones and computers are black boxes, the inner workings of which are magical and intimidating. We— programmers— are the supposed magicians who know these dark arts. Sadly, most of us just push the mystification down a few levels. We know how to build apps! And websites! Below our slice of the stack? Oh, it’s black magic all the way down.

Here’s a secret: nothing is magic. It's not even as hard as we think.

Abstractions don’t exist to protect poor muggles from the impenetrable wizardry below. Abstractions exist because the layer below stopped scaling to the problems that needed solving.

Take assembly, for example. Impossible! Incomprehensible! Actually it’s pretty straightforward. Do you know how to program? Then you could learn assembly. It’s different and it’s tedious… but it's not magic. It just stopped being enough to write the kinds of programs people wanted to write without major headaches.

To solve this, programmers of yore dreamt up higher level languages and implemented them with compilers and interpreters, which, by the way, turn out to be just about as complicated a program as you’d want to write in assembly. But you don’t have to build them in assembly, or even C/C++ for that matter. You can write a new language in whatever language you know now!
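As a small taste (a toy example of my own invention, not Bitsy or any real language), here's a recursive descent interpreter for arithmetic expressions, written in plain Python:

```python
# A toy interpreter for arithmetic like "2 + 3 * 4", to show that language
# implementation is ordinary programming. Division is integer division here.
# Grammar: expr := term (('+'|'-') term)*
#          term := factor (('*'|'/') factor)*
#          factor := NUMBER
import re

def tokenize(src):
    """Split the source into number and operator tokens."""
    return re.findall(r"\d+|[+\-*/]", src)

def evaluate(src):
    tokens = tokenize(src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def next_token():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():
        return int(next_token())

    def term():
        value = factor()
        while peek() in ("*", "/"):
            op = next_token()
            value = value * factor() if op == "*" else value // factor()
        return value

    def expr():
        value = term()
        while peek() in ("+", "-"):
            op = next_token()
            value = value + term() if op == "+" else value - term()
        return value

    return expr()
```

There's a tokenizer, a parser, and evaluation, the same phases a "real" compiler has, in miniature. Everything past this point is refinement, not wizardry.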

If there is something impenetrable around the lower levels of the stack, it's the resources out there for learning about them. They’re often theoretical and opaque, rather than practical and approachable. That’s a shame, because learning about the lower levels can make you a better developer, and it's also just plain fun!

A number of months ago I set out to learn about building a compiler. It’s been an on-and-off hobby project, and the learning has been slow at times. I’m still far from an expert, but I’ve gotten somewhere and it’s been fun, so I figured I’d pause for a second and make the path a little easier for the next person down.

Bitsy is a programming language with an unusual goal: it aims to be the best language to target when building your first compiler or interpreter. BitsySpec is a set of tests, and a command line utility for running them, that lets you know how your implementation is coming along. Point BitsySpec at your implementation and you’ll know what’s working and what's not. You can TDD your first compiler!

There’s one implementation of Bitsy so far: bitsy-swift. I’m hoping one day there will be many of them, along with resources and tutorials for every language community. Will yours be next?

Note: I’ll be speaking about building a Bitsy compiler in Swift at 360|iDev and Indie DevStock. If you’re attending either, be sure to say hello!

Last month I had the pleasure of giving a talk at the Philadelphia branch of the New York Code + Design Academy (NYCDA). I spoke with a group of students just completing a three month, intensive boot camp in web development with Ruby on Rails and JavaScript.

As an iOS developer, my goal was to give them a glimpse of a "different world." I sought to highlight areas where the greatest contrasts existed between iOS and what they'd been learning. As such, I focused on three main areas:

Building User Interfaces

Programming Languages (especially Swift's static type system)

Deployment

If you're a developer of any kind who is curious about what iOS development looks and feels like, you might find this talk helpful!

Rewrites can be scary, and they shouldn’t be taken on without very careful consideration of the value of a working codebase, even if it is “legacy.” Still, sometimes they are necessary, and the launch of KÜDZOO 1.7 to the App Store proves they can be done successfully.

Over the past several months, I’ve had the pleasure of working with the excellent folks at Jarvus to rewrite the KÜDZOO iOS app, which is now in the hands of the company’s nearly 400,000 users. KÜDZOO is a company that aims to prove “that the carrot is more effective than the stick when it comes to student achievement,” as Forbes recently put it when naming its founders to the 30 under 30 in education.

Many factors were considered in the decision to rewrite. In particular, the need to update the app’s UI to work with the full range of modern iPhone screen sizes weighed heavily. The UI of version 1.6 was built to be “pixel perfect” with older models of the iPhone and was done completely in code. No xibs, no storyboards, and certainly no autolayout. The painstaking process of converting such a codebase to one which was ready to work on today’s range of iPhone sizes was daunting enough to make a full rewrite viable.

An important factor in making the rewrite successful was the strong product management and quality assurance services provided by Jarvus. They also facilitated updates to the app’s backend and API that made writing the frontend straightforward. Working with them was a great experience, and their work ensured that 1.7 shipped without regressions or new bugs.

The rewrite also gave me the opportunity to carefully consider the architecture of the app from the ground up. Version 1.7 uses MVVM for presentation logic and makes heavy use of principles of immutability and one way data flow. All models are immutable as are the view models. Where possible, the app favors the use of pure functions that simply consume data in one form and return a transformation. This allowed for a clean, testable structure that also reduced bugs and regressions:

A given JSON response from the server is transformed into immutable Models

Models are transformed into immutable ViewModels

ViewModels are wired to the Views
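The shape of that pipeline can be sketched as a chain of pure transformations. This illustration uses Python with frozen dataclasses standing in for Swift structs; the type names and fields are invented for the example, not the actual app's code:

```python
# Illustration of one-way data flow with immutability:
# JSON -> immutable Model -> immutable ViewModel, via pure functions.
# All names here are hypothetical; the real app is written in Swift.
from dataclasses import dataclass

@dataclass(frozen=True)   # frozen => immutable, like a Swift struct with `let`s
class User:
    name: str
    points: int

@dataclass(frozen=True)
class UserViewModel:
    title: str
    points_label: str

def parse_user(json: dict) -> User:
    """Pure function: JSON in, immutable Model out."""
    return User(name=json["name"], points=json["points"])

def present_user(user: User) -> UserViewModel:
    """Pure function: Model in, display-ready immutable ViewModel out."""
    return UserViewModel(title=user.name.upper(),
                         points_label=f"{user.points} pts")
```

Because each stage is a pure function of its input, each can be unit tested in isolation by feeding it a value and asserting on the result, with no mocking of views or network required.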

As a developer and consultant, nothing is more satisfying than delivering value to a client and seeing something you’ve worked on in the hands of their users. Shipping version 1.7 of the KÜDZOO app was no exception!

Love is a funny word. We use it to describe the deepest, most passionate of human relationships, but also how we feel about cheese fries.

“I love you.”

“I love cheese fries.”

What, then, do you make of a “tech conference” that has love in the name and declares itself “A conference about people, not tech”?

The best definition of love I’ve heard is: “To will the good of the other for the sake of the other.” This describes well what I saw on display at the CocoaLove conference here in Philly.

It started with Dave Wiskus’ keynote. It was a raw, real story of the people who had helped him, most especially his dear friend Alex King, who passed away in September. Alex and others had helped Dave with no expectation of being repaid. In fact, they helped him knowing he’d likely never be able to pay them back. They willed what was best for him, simply for his own sake. That’s love.

I felt that same kind of love upon entering the Cocoa community a few years back when I began attending Philly CocoaHeads. People helped me simply for the sake of helping me. If they hadn’t, there’s no way I could have gone from the hobbyist I was at the time to the full time iOS developer I am today. To all of you: thanks.

I don’t think this quality is unique to our community, but I am proud it seems present in such generous proportions.

I attend a few technical conferences each year and I try to come away with resolutions from each. These usually involve new frameworks, languages, or techniques I want to commit to learning more about— to make myself a better programmer.

Coming out of CocoaLove, I have one resolution for the next year: find opportunities to pay forward the love the Cocoa community has shown me— to make myself a better human.

iOS 9 was released yesterday and includes support for web content blockers. The internet is roiled in a debate about the “morality” of using ad blockers and I find myself disagreeing with some very smart people who I respect. As such I feel the need to think carefully about my position and lay it out clearly.

Spoiler alert: there is nothing even remotely immoral about using an ad blocker.

In determining if something is moral or not, there are two pieces we ought to consider: the intention and the object; more simply, the why and the what. If either of these is bad, then the action as a whole is immoral.

Intention

If you do good for the wrong reasons, it’s still wrong. If a rich person gives millions of dollars to a politician's charity, but does so to buy influence and power, then the act is wrong though the charitable cause is noble.

If users of content blockers employ them with the intention of harming content creators, then the act is wrong. I can’t speak for everyone, but I use ad blocking to make pages more readable, to make them load faster, to reduce data usage, and to protect my privacy. I believe this is the case for most people, if not everyone.

Object

An action can be immoral despite good intentions. If a doctor hopes to cure cancer, but pursues this goal by applying experimental treatments to unknowing patients, her actions are wrong despite being directed toward an ultimate good. Noble ends don’t justify evil means.

Is the usage of a content blocker inherently wrong despite morally neutral intentions? The question boils down to this: do content creators have a right to dictate the way others partake in that which they create?

Legal Rights

Many people incorrectly conflate the law and morality. Actions that are legal can be immoral, and the law itself can be unjust. In the latter case, the moral act is in fact civil disobedience, as it was for Martin Luther King Junior. On the other hand, breaking a law that does not rise to the level of injustice is immoral.

Copyright laws do grant certain rights to content creators, and it is immoral to break those laws even if you disagree with them (as some do); copyright laws are not unjust. If you copy someone’s content and resell it, claim it as your own, or republish it and try to profit from it in some way, this is certainly immoral because it breaks the law.

Copyright, however, does not grant the holder the right to dictate how others partake in their content. It is legal to read the last page of a book first or to skip some boring dialog in a movie, even if the creator of the work would prefer otherwise.

Users of ad blockers are obtaining the content legally and choosing to alter its presentation on their own devices before taking it in. This is perfectly legal and in no way breaks copyright law.

Intrinsic Rights

Using an ad blocker is not illegal and can’t be deemed immoral on these grounds, but legal acts are not always moral. The question becomes one of essential, or intrinsic morality. Do creators have a moral right to dictate how others partake in their creations, even if the law doesn’t, or can’t, enforce it?

If the writer of a longform piece decrees it ought to be read in one sitting, have I done something immoral by reading it in bits? If a musician carefully crafts an album to be listened to as a whole, have I done something wrong by only listening to my favorite song? Of course not.

Creators have no intrinsic right to dictate how their content is taken in. I have no moral obligation to obey their wishes nor do I have any obligation to justify my ignoring them. One could say “If you don’t want to listen to the whole album, then don’t listen to even one song.” Why would it be assumed the receiver of content must make this choice? To what principle can we appeal to grant such moral authority to content creators?

In reality the onus is flipped: if a creator is not willing to have her creation received by others as they choose, then she ought not release it publicly. Those who obtain content through licit means are free to take it in as they please.

Business

I have talked only generally about creators’ rights regarding their content, not about ad filtering or business models. Does it matter specifically that using ad blockers breaks the means by which content creators intended to monetize?

Pretend I place a disclaimer at the top of this blog that says before reading it you must send me $10. Since it is my intention to monetize this way, are you morally obligated to send me $10 or otherwise close the page before reading the content? Are you stealing from me if you read the piece but don’t send the cash? Obviously not.

If an artist displays a painting in a public space, and then places an ad next to it, is it immoral to enjoy the piece but ignore the ad? If I stand gazing at the work but hold my hand up to block the ad, have I done something wrong? Am I stealing from the artist? No. The artist has simply chosen a dumb business model.

In 2015, display ads on the web are fast becoming a dumb business model.

The Future

Technology and business move fast, and this can be scary. As one business model is subverted, it can be tempting to fall into the trap of believing nothing else will emerge, and therefore to treat the change as morally perilous. Open source is communist, ride sharing is evil, robots are taking all our jobs, etc., etc. This same moral outrage is being directed at users of content blockers, and as usual it is actually a misplaced fear of change.

Display ads on websites are not the best business model we can come up with for digital content. Soon there will be 5 billion people in the world connected to the internet via smartphones. I have a hard time believing they will have nothing good to read or watch because content blockers killed display ads.

I had the pleasure of giving a short talk at this month's PhillyCocoa meeting about implementing a simple screen that happened to contain some repetition. I reviewed four different approaches in Interface Builder and discussed the tradeoffs involved in each. I hope you enjoy it!

Last week I had the pleasure of attending the 2015 360|iDev iOS developers conference in Denver, Colorado. I attended for the first time this year, taking advantage of its location to visit family in the area. I’m glad I did, as the city is clean and beautiful and the conference was top notch. There were lots of great talks and plenty of takeaways, but there was one theme that kept coming up in the sessions I attended and the conversations I had with other attendees.

The Cocoa developer community is very much wrestling with the introduction of Swift. It’s still up in the air how the new language will change our community and how we will have an impact on its maturation.

Swift’s introduction has led to a flurry of experimentation within a community that is traditionally more conservative and set in its ways. This is a good thing. There is much to learn from other language communities, and Swift enables us to embrace new patterns and architectures that were impossible or clumsy in Objective-C.

Several talks demonstrated this at 360|iDev, but none more so than Benjamin Encz’s talk “Safer Swift Code with Value Types.” In it, Benjamin laid out an alternative architecture for app development inspired by Flux, a pattern introduced alongside Facebook’s open source React JavaScript framework. I highly recommend checking out Benjamin’s talk and the sample code he provided along with it.

Whether or not functional and reactive architectures end up being widely embraced by the Cocoa community, the point is that these patterns are being explored and discussed by everyday iOS developers. This is exciting and healthy for our community.

There is no doubt that in the next few years, “idiomatic Swift” will stabilize. No factor will be greater in determining what that looks like than the first Swift-only frameworks Apple releases. Hopefully the engineers at Apple are taking note of the community’s willingness to explore and learn new ways to think about building great apps.

When I was in college, a close friend of mine got a Blackberry. Though this was less than a decade ago, owning a smartphone was still fairly uncommon at the time. I distinctly remember telling him how crazy I thought it was that his phone buzzed every single time he got an email.

Fast forward to today and if my smartphone isn’t in my pocket or next to me I feel hopelessly disconnected. How did we get from there to here?

THE "TOO CONNECTED" EVOLUTION

First, the feeling of being disconnected without a smartphone happened surprisingly quickly at an individual level for early adopters. It turns out the benefits outweighed the annoyance I’d feared. Having an always connected computer in your pocket quickly rewires your own internal expectations.

Next, the rise of mobile computing led to an evolution in the apps and services we use. Most students today probably couldn’t imagine life without Instagram or Snapchat, neither of which makes any sense at all unless it’s in your pocket all the time. With the increased connectivity came novel uses of the mobile platform, ones which were predicated on that connectivity.

Finally, the ubiquity of smartphones in mainstream life and work led to a change in social expectations. If I send you an urgent WhatsApp message, I expect some kind of response in short order. Why wouldn’t you have your phone on you, after all? The expectation is that you check it regularly and respond to notifications that matter from a host of channels.

Each step in this evolution reinforced the previous one, and underlying and enabling each was the improvement of the mobile platforms themselves. The hardware got faster but cheaper; sensors were added; the OSes evolved and added powerful APIs for developers to hook into; mobile data networks improved in speed, reliability, availability and price.

Today, almost everyone is attached to their phone, and though we may indulge in lamenting this fact from time to time, the truth is we all kind of love it and our lives are better for it.

BACK AT THE BEGINNING?

Enter Apple Watch, the company’s “most personal device yet.” The watch is a tiny computer you wear, one that physically taps your wrist when you get a notification, one that lets you take a phone call or dictate a message just by lifting your arm.

Isn’t this a step too far?

After a couple of months with the watch, my take is we’re simply back at the beginning of the same cycle we went through with the smartphone.

Early adopters are already living out the first step- the rewiring of personal expectations. Getting notifications on your wrist and being able to quickly take action is less annoying than it sounds and actually very convenient. Suddenly, if I don’t have my watch on, I feel a bit of unease about my less-connected state.

Like last time around, there’s also no doubt that the devices themselves will get faster, cheaper, gain sensors and provide developers with lots of new opportunities. The question then is whether smartwatches reach the scale needed to progress to the next two phases.

Will we see novel uses of the smartwatch platform that simply wouldn’t work on a smartphone? (Here Apple Pay perhaps provides a hint at what these could look like: connectivity extends into the physical world). Finally, will social expectations themselves begin to bend as more and more people adopt smartwatches? Will paying by physically swiping a credit card eventually make you look as old fashioned as stopping to write a check in the grocery line?

Not necessarily. While I’ve articulated parallels to smartphone adoption, it’s important to understand there are some significant differences. The smartphone took two technologies everyone already agreed were great and brought them together: cellular phones and the internet.

The watch has a lot more convincing to do, and it’s possible it takes a lot longer than the smartphone to make the case. It’s also possible that the enormous gravity of the smartphone ecosystem itself keeps smartwatches perpetually in its orbit- nice accessories that many people may own, but not a true platform in their own right.

I do think the convenience of wrist-worn computers will eventually lead to their mainstream adoption, a host of novel uses for these devices interacting with the physical world, and finally a shift in social expectations that makes them indispensable. What’s not clear to me is how long it will take for this to occur, but my guess is it will be a slower launch than the meteoric rise of the smartphone.

For most developers, refactoring legacy code is a painful task. For a strange few, myself included, the process can also be oddly gratifying- akin to the satisfaction one feels cleaning up a messy room or restoring an old car. Wherever you fall on that spectrum, knowing when to refactor, and when not to, is a core competency of a good software engineer. Like many things in software engineering, making that call is more of an art than a science. While we can’t derive a single formula to decide when or how aggressively to refactor, we can understand the tradeoffs involved and then apply this understanding thoughtfully to specific situations.

As a thought experiment let’s imagine the Worst Case Scenario™. You have just inherited a legacy base of very messy, non-idiomatic code with no test coverage and no access to the former developers. The app is live in production and the product owner wants to add features and release new versions in addition to maintaining the production app. How do you handle a situation like this?

At one extreme, you could simply implement new features in the app following the patterns (or anti-patterns) established in the legacy code without doing any refactoring. No technical debt would be paid off in this approach; in fact, more debt would be accrued. The time to add the next feature would be short, but the time to add the Nth new feature would be increasingly long as the project began to buckle under the weight of technical debt. Because you’d be making the minimal set of changes to implement the next feature, the risk of regression is the lowest. Over time, however, the likelihood of bugs increases.

At the opposite extreme you could refactor the entire application, freezing new feature addition until the codebase was “good.” The time to add the next feature is extremely long, as long as it takes to refactor all the code, but the time to add the Nth feature is very short, because the beautiful refactored code you’ve produced is now a joy to maintain. On the other hand, because you’ve touched every piece of code in the app, the risk of regression for the next feature is very high, but your improved architecture reduces the likelihood of future bugs.

As is usually the case, choosing either extreme is the wrong answer, but considering them helps us understand the tradeoffs involved. In reality, you’ll have to refactor as you add new features, but how high do you turn the refactoring dial? That is the important question.

Refactoring less means the time to implement the very next feature is shorter, but the time to implement the Nth feature increases.

Refactoring more means the time to implement the very next feature is longer, but the time to implement the Nth feature decreases.

Refactoring less means the likelihood of regression is lower, but the likelihood of a buggy app in the long term increases.

Refactoring more means the immediate likelihood of regression is higher, but the likelihood of other bugs cropping up in the long term is lower.

Armed with this understanding of the tradeoffs involved, it’s now up to you to apply this knowledge to any given situation. How urgent is the next batch of features? Is there a strong QA team in place likely to catch regressions? Is the app likely to remain in production for the very long term? All of these questions have to be considered.

I’d add that while there is no one answer, there is a rule of thumb I believe applies in almost all situations: try not to accrue new technical debt. At a minimum, refactor just enough such that your changes now won’t have to be unwound later. You may not have the luxury of being able to fix what’s already there, but let your additions light a path for how things could be improved in the future.
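To make this rule of thumb concrete, here’s a hypothetical Swift example (the code and scenario are invented for illustration). Imagine the legacy app formats prices with the same inline expression copied into several view controllers. Rather than pasting that expression a fourth time for your new feature, extract it once- paying off just enough debt that your addition doesn’t add to the pile:

```swift
import Foundation

// Legacy pattern, duplicated inline across several view controllers:
//   label.text = "$" + String(format: "%.2f", Double(cents) / 100.0)
//
// Minimal extraction: one small, testable function that the new
// feature calls. The existing call sites don't have to change today;
// they can migrate to this function later, as each one is touched.
func formattedPrice(cents: Int) -> String {
    return "$" + String(format: "%.2f", Double(cents) / 100.0)
}

let price = formattedPrice(cents: 1999)
// price == "$19.99"
```

The old duplication still exists, but no new copy was created, and the extracted function lights the path for cleaning up the rest incrementally.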