As it does most years around this time, the Saint John River in my hometown of Fredericton, New Brunswick, has flooded its banks, closing some streets and leaving general traffic mayhem in its wake. While watching the chaos, I realized it might make for an interesting project for children to explore and learn about. In doing so, they could learn to think in systems of all kinds, from natural systems like waterways and seasons to human-made systems like urban planning and colonialism. These sorts of systems, and these ways of thinking, are grounded in the world of the child and prepare them to think in powerful ways they’ll find useful throughout their lives.

Here are some things a classroom could do:

Ask the class: who was affected by the flood? Students who take the bus? Those who get driven to school? Those who walk or bike?

What happens to the city when the river floods? What roads are closed? What happens when those roads close? (Traffic backs up big time; many years there is traffic all the way up the city’s main hill!) How essential are those roads that were closed? All kinds of urban planning questions could be raised and explored here.

What caused the river to flood? Here’s an opportunity to talk about the seasons, snow / ice melt, and how natural systems interact with human / city systems.

It’s also an opportunity to talk about the city of Fredericton’s geography (it’s a river valley city). Why is Fredericton built as a river valley city? (How was it colonized by Europeans? Why did they establish it on the river? Were there Indigenous settlements here before the Europeans came?) This is also a chance to discuss why Archie comics are so relatable to people in North America (there are so many “Riverdale” cities, like ours!)

How does flood time compare to other events where streets are closed? (like a parade or when the Prime Minister visits) How does advance notice help you prepare? (and how do you let the city know about it?)

Finally, you could use the flood as a jumping off point for thinking about what kinds of tools you might use to think about a flood. You can look at it from a city planning perspective, with paper maps, rulers, etc. What do the maps show? What do the maps ignore? Are there different kinds of maps? How would you apply what you learn to other cities that have different layouts or geography? How might you deal with the flood differently? How might you prepare for it? What kinds of tools let you think about cities abstractly?

Fredericton’s annual flood is a concrete event happening in the students’ own city, in a place they’re familiar with, and it’s something that can prompt questions and get them thinking about bigger pictures.

Compare this to the way maths (or worse, computers) are taught today: entirely abstract and detached from reality, devoid of meaning and, to most kids, utility. None of the things I discussed really requires a computer, but it’s interesting to consider how you might explore those questions with the aid of a computer (and if such a computer doesn’t exist, what’s your wildest fantasy for one that does?).

What an astonishing thing a book is. It’s a flat object made from a tree with flexible parts on which are imprinted lots of funny dark squiggles. But one glance at it and you’re inside the mind of another person, maybe somebody dead for thousands of years. Across the millennia, an author is speaking clearly and silently inside your head, directly to you. Writing is perhaps the greatest of human inventions, binding together people who never knew each other, citizens of distant epochs. Books break the shackles of time. A book is proof that humans are capable of working magic.

There is a Mall. It is owned by a company, but its doors are open to the public. It calls itself a town square. But it is a mall. It is owned by a company.

There are stores in this mall, where people can shop. There are chairs and benches in this mall, where people can talk.

There are nice people in this mall. There are mean people in this mall. There are Nazis in this mall, and racists, and sexists, and homophobes, too. There are more nice people than mean people, but the mean people are much, much louder in this mall. The nice people feel threatened in this mall.

From time to time an announcement comes over the PA system in the mall. It’s the mall owners. They say, “How can we make this mall better?” to which all the nice people cry out, “Kick out the mean people! the Nazis, the racists, the sexists, the homophobes too! They make us feel scared.”

Undeterred, the mall owners will reply over the PA system “We want to make the conversations in this mall healthier. We think the problem is the chairs and the benches are just not comfortable enough. We will be installing new chairs and new benches so you can have better conversations. Perhaps we’ll also fix the lights and the front door. Thank you for your input, we value your opinions.”

Hundreds of news articles are written about the coming improvements to the mall, breathlessly reporting on the upholstery of the new chairs and the seating capacity of the new benches.

The man who owns the mall puts down the newspaper, proudly reflecting on a good day’s work. He sleeps eight hours that night, same as he does every night.

Outside, airplanes appear rather graceful, glimmering and gliding through the air. Sleek metal birds. But their insides betray this grace. Inside, they’re endlessly noisy. This rattles, that trembles, these hiss. Metals clink and clank. Outside, the wings wobble and roar while exhaust farts behind us and stains the sky.

~ ~ ~

This morning, while waiting to board my flight at an NY airport, I was saddened to see two Port Authority police officers (and later, a third) confront a man, white, fifties, who was at the gate filming out the window.

They asked him for his ID, his name, his travel agenda, and, of course, why he dared film the airplanes. The man said he’s a video producer and does it as a hobby. They asked to see his phone and his camera.

You see, they explained in gentle tones, it’s suspicious to film in an airport, and of course it’s their duty to investigate, you understand, of course. While they did not speak forcefully, I refuse to describe them as “polite.” A trio of armed officers can’t gang up on a man politely. It’s incompatible with politeness.

It’s so astonishingly sad that someone filming airplanes is considered suspicious, because that logic doesn’t hold up against the simplest of scrutiny. It’s action movie logic.

The officers, after taking notes, eventually told the man to “have a nice day” and left him alone. He didn’t appear obviously, outwardly distressed, but I could see that he was. This rattled, that trembled. He zipped up his bags before disappearing, if I had to guess, to go throw up in a garbage can from the stress.

~ ~ ~

Birds on the other hand are graceful, from what I can tell. No bird need roar in flight. No bird need shitstain the sky while it flaps, flutters, and glides. But maybe it does hiss and clank and rattle inside a bird. Does it gurgle? does the heart drum like thunder?

I’ve been trying to write this post the whole year, and it’s not so much a “year in review” post as it is that I just ran out of days to write about 2018, so here we are.

The intention of this post, as its title suggests, is to write about what it feels like to be alive in this the year of our lord (Ariana Grande) 2018. To the reader of the future, be it me or be it you, I want to try to express what the mindset of one white Canadian man was, one who lived away, who lived at home, who tried, failed, failed, and finally succeeded at becoming a temporary immigrant (“the Resident Alien”) in the United States.

There are certain things I, the writer, know today that you, the future reader, may not know. The events of this year are fresh in my mind, but they haven’t all become history yet, because the histories just haven’t been written. And who writes them will determine, in part, how you, reader, get to learn about what this year was like. If for example Trumpism and the rampant gutting of the US’s government continue, it’s likely the history books will look favourably upon 2018 as a year of triumph. They shouldn’t. We know there is at least one major investigation going on into Trump’s campaign (“the Mueller Investigation”) but we don’t yet know what it’ll reveal nor do we know if it will matter in the end. It should.

On the other hand, you the future reader will undoubtedly know many things about this year that I just don’t — just can’t yet — know. 2019 and the years beyond will certainly reveal new truths about this time: leaks of secret meetings, revelations of wrongdoing, and so on. You might even one day have the benefit of clarity. You’ll likely have some kind of viewpoint on this year and this era, some whole (or at least whole-er) perspective on what the living fuck was happening in the world, that I just am not yet privy to.

You may wonder: how did we let this all happen? How did a nation allow itself to be so blatantly abused? How did the rich profit so much? How did a president condone tearing nursing babies from their mothers’ breasts? And how did a government not condemn its leader?

I’m hopeful you have more answers than me.

This year has felt like an eternity, somewhere between a slow drip and water torture. It’s been a year of violence — not just of war, shootings, and hate crimes, but psychological violence too. The deluge of scandal, the festering undertow of nastiness and spite, the abstraction of people into “illegals” and “resident aliens” and “caravans,” is a baseball bat to the mind. It shatters our ability to care, to make sense, to object. It is beyond numbing, it is pulverizing.

But.

I try. I think we all do.

It’s 2018 and we’re doing the damn best we can do, those of us lucky enough to do so. In 2018 it’s a struggle, but we’re learning how to be resilient. We’re learning, slowly, how to be less cynical. We’re learning. And we’re coming. (And, those of us who can, are voting).

[To] read a thousand books in my lifetime. I decided to start counting books I’d read since November 14, 2014 (although I’d read many books before this, I really only wanted to start counting then, so I could better catalogue them).

This year was quite a wild ride for me, in far more ways than I can or will describe in this blog post.

Anyway, here are the books Jason loved, hated, and overall read in Year 4 of his reading quest!

Mindstorms, by Seymour Papert.

This was the first book I read when I began my reading challenge in 2014 and this year seemed like a great time to revisit it. Since I’m a different person every time I read a book, it was a neat experience reflecting on how I’d changed since I’d last read it.

Mindstorms remains a remarkable book which you should (re-)read if you haven’t. Don’t get distracted by how Papert talks about computers; get distracted by how Papert talks about children, learning, and powerful ideas. In our shallow, callous pop culture, it’s nice to be reminded of deep, earth-shaking ideas and an unfaltering belief in the potential of children.

Seconds, by Bryan Lee O’Malley.

Oh my god was it less than a year ago that I read this??

Speaking Out Louder, by Jack Layton.

Layton is probably Canada’s best Prime Minister who never got to be Prime Minister. This book, written before his untimely death, discusses big ideas for Canada and Canadians. Practical, rational, and hopeful.

Visual Intelligence, by Donald Hoffman.

Jesus I read this in January of this year?

Dear Data, by Giorgia Lupi and Stefanie Posavec.

Good god it can’t be.

Are We Smart Enough to Know How Smart Animals Are?, by Frans de Waal.

How long was this fucking year?

At Home in the Universe, by Stuart Kauffman.

This book promised some pretty profound things about the underlying systems of the universe, but in my opinion spent a little too long admiring itself in the mirror rather than delivering on its ideas.

Slapstick, by Kurt Vonnegut.

Uproarious! Again I say, everyone should read more KV.

Turtles All the Way Down, by John Green.

Um. I refuse to believe I read this book in February. Nope. It was definitely ten years ago when I read this book.

Hunger Games: Catching Fire, by Suzanne Collins.

Alright, real talk: I really enjoy the Hunger Games books. I love the narrative style, I love all the emotions Katniss goes through, and I love the themes it explores: how the rich profit off the broken backs of the poor, how opulence depends on suffering, how savage entertainment can be.

But 2018 was not the time for me to have read this book.

This book accompanied me to the US border where I was denied entry and separated from my family. This book accompanied me while I watched the US government tear children away from their parents and lock them in cages.

2018 was a year of physical, emotional, and psychological violence. And this book was a little too much for me.

The Dark Tower: The Gunslinger, by Stephen King.

Don’t even.

Cognition in the Wild, by Edwin Hutchins.

Hope you like boats!!

The Demon-Haunted World, by Carl Sagan.

God bless the greatest invention of all time: writing, such that we may still be haunted by Carl Sagan’s words decades after he left us.

This book argues for science as the most tested and true antidote to superstition and human… all-around dumbness.

Star Wars: Aftermath, by Chuck Wendig.

💯

Jurassic Park, by Michael Crichton.

What I wanted: everything I liked about the movie but diving deeper into the nerdier sciency stuff.

What I got: Everything I wanted.

The Inconvenient Indian, by Thomas King.

Hi, I’m a white Canadian who knows next to nothing about the past + present of my country’s Indigenous peoples. King’s book provided an excellent starting point for an area of my country and culture I still know shamefully little about.

Snotgirl, Vol 1 & 2, by Leslie Hung & Bryan Lee O’Malley.

Beautiful and brilliant and juicy and hilarious.

Borne, by Jeff VanderMeer.

Last year I read (but didn’t love) VanderMeer’s Annihilation and was reluctant to read more of his books, but the cover to Borne intrigued me (some sort of sentient plant-baby that grows and learns?? hell yes). It was kind of a blast to read even if my original guess as to what it was about was way off.

It’s hard not to love the plant-baby tho.

Ramshackle, by Alison McCreesh.

Ugh really was this really 2018?

Star Wars: Aftermath 2, by Chuck Wendig.

Having never really read Star Wars books until this summer, I had fun with these two. But it’s made me realize: the universe of Star Wars only really makes sense in movies and doesn’t really translate super well to books.

I think books demand a little more coherence than do movies. In a movie, it’s more or less acceptable to have a “big bad evil government that does bad” and you can suspend disbelief without too much trouble. But in a book? Well, there’s gotta be backstory, there’s gotta be motivation, there’s gotta be more thought to what’s going on. And suddenly, Star Wars kinda falls apart a bit.

But hey, it was still a fun read and couldn’t we all use some more fun reads in our lives?

I Contain Multitudes, by Ed Yong.

You contain anecdotes.

Unflattening, by Nick Sousanis.

Loved this book! Felt like a spiritual successor to Understanding Comics and again reminds me how much unfulfilled potential the comic format has as a tool for explanation.

A Man Without a Country, by Kurt Vonnegut.

Re-read this bad boy on an airplane and christ it’s like jet fuel for your rational brain.

Alone Together, by Sherry Turkle.

Sherry gets me.

Scale, by Geoffrey West.

This was a good book that needed to be scaled to ¾ its actual size.

Authority, by Jeff VanderMeer.

Way better than Annihilation.

Creative Selection, by Ken Kocienda.

Reincarnation Blues, by Michael Poore.

My favourite book of the year, easily. It tells the story of Milo, a man who is given 10,000 lives in which to attain “perfection.” It details, in ways heartwarming, sometimes sad, and always hilarious, how he lives and dies.

A line in the book made me cry on the subway, so much I had to put the book away.

Then I re-read the line a few minutes later and damnit I cried all over again.

This book made me think about life and death in ways I hadn’t before. Some new things clicked for me. It made me think about time in new ways too. I can’t tell if I’m living my first or my last life, but I’m definitely living one hell of one.

Do you ever get that giddy feeling when you meet someone new who you just really click with? And you think, wow, this person is just fascinating! All you want to do is just hear more.

And then, oh fuck, they go and say something really awful and you cringe, and you think “No! I wanted you to have not said that, I wanted you to be better than that!” Do you know this feeling?

There were moments in this book when I felt like that. I loved this book, mostly, but there’s a chapter where, for no real reason! there’s the trope where a female character makes a false rape accusation against our male protagonist. And then he goes to jail and he suffers in all kinds of terrible ways. And frankly that’s just garbage. It’s a bad trope and it needs to die and I was so upset to read that in this book. I felt hurt by it, I felt like the book betrayed me.

Anyway, read the book, or don’t. I’d understand either way. But suffice it to say, it was a wild ride for me.

Here, by Richard McGuire.

Here’s another non-linear story about time, this one told as a graphic novel. It’s an expanded version of McGuire’s groundbreaking comic of the same name.

What an eerie trip this book was. Reading it, seated in the corner of my own living room, wondering “what has happened in this very room over the past century? What highs and what lows? Who has cried where I’m sitting? Who had happy birthday sung to them? How many puppies and kitties have slept here? What were their names? What was here before this building? What will be centuries from now?”

And how lovely is it that it only takes a few hours to read this book and experience all that for yourself?

The Hand, by Frank R. Wilson.

Sam Zabel and the Magic Pen, by Dylan Horrocks.

Pedagogy of the Oppressed, by Paulo Freire.

Story time.

It’s September 2018. I’m in Powell’s City of Books in Portland, Oregon.

I have a friend who’s a big fan of popular education, a topic I know next to nothing about. But I figure I can find a book about it in this behemoth of a store. I remember the name of this book and its author, so I head to the education section to find it. I can’t find it.

To a sales clerk I say, I’m looking for this book, I’m not sure that you even have it, could you help me find it? and I show her the name.

She furrows her brow and says “Ooohhh, I know this book. This book is always hiding in the wrong places.” This book is a troublemaker, she says with her tone. This book is up to no good, she says with her eyes.

At last we find the book, this pesky book. This book that’s up to no good. And lo, this book is challenging, but rewarding. And if I’m being completely honest, which I am, it was a damn hard read that I’ll need to revisit when I’m feeling better.

Quick PSA: I’m retiring my portfolio domain whynotfireworks.com, and instead have moved everything over to this current Speed of Light domain. Links on the old domain will automatically re-direct to this domain for the coming while.

Why do this? Well, in the last year or so I realized one major flaw of the “websites are on domains” system: the more distinct domains you control, the harder it is to maintain them all forever. Since the WWW is a thoughtlessly cobbled-together mess of a publishing platform, the onus is on publishers (aka site owners) to ensure domains (and thus publications) stay up. Keeping multiple domains up forever is harder (and more expensive) than keeping just one.

So, from now on I want to keep as much as possible on a single domain, this one. The old portfolio domain will redirect here for a little while, but eventually it’ll go away (“please update your links” he says to an audience that surely will not). Eventually, I’d like to turn nearthespeedoflight.com into less of a blog and more of a portfolio / homepage-of-me, but for now I’m leaving it alone.
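The post doesn’t say how the redirect is actually set up, so purely as an illustrative sketch: the whole job can be done by answering every request on the old domain with an HTTP 301 pointing at the same path on the new domain (the handler below is hypothetical, not my real configuration).

```python
from http.server import BaseHTTPRequestHandler

# Hypothetical sketch only: the target is the new domain, and every
# request path on the old domain maps onto the same path there.
NEW_DOMAIN = "https://nearthespeedoflight.com"

def redirect_target(old_path: str) -> str:
    """Map a path on the retired domain onto the same path on the new one,
    so old links like /portfolio/foo keep working."""
    return NEW_DOMAIN + old_path

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 301 means "moved permanently", so browsers and search engines
        # know to update their records rather than keep asking.
        self.send_response(301)
        self.send_header("Location", redirect_target(self.path))
        self.end_headers()
```

Because the 301 is permanent, well-behaved clients eventually stop depending on the old domain at all, which is what lets it quietly lapse down the road.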

Sooooooooo I go back and forth about blogging. Sometimes I think, “hey I should blog all the time, every day. I should just blog about my day and stuff. And sometimes, write longer essays and whatnot, because so much of today’s writing happens in slacks or twitter and you can’t make any kind of good argument on those!”

But then again I think, hmmm, what if blogging shouldn’t exist? I kind of run with the assumption that because “blogging is dying” that means “blogging must be saved” and maybe that’s a faulty assumption. Maybe blogging was just this thing that happened for a while and that while is now passing.

Also, and this is a big also, I worry that people are inundated with stuff to read and process, and most of the time I don’t think my writing (especially the “this is what I did today” stuff) is worth adding on to those piles. We talk about The Feed but really it’s The Flood. And it’s not that I think my writing is terrible necessarily or anything, just that, I respect people’s attention (or maybe, I fear a global lack of concentration / understanding) such that I don’t want to derail it unless I have something Real Important to talk about, and usually I don’t.

There’s another possible world where I blog, but it’s a kind of lowkey affair, for those who want to check in on me from time to time (aka my friends!). And that’s a nice feeling. Or something like an online diary.

I’m constantly amazed by and enamoured with Jason Kottke, whose eponymous website has been running for nearly 20 years (and it’s his full-time job). These days it’s a kind of “what Jason K finds interesting on any given day,” but since he’s been running the website for almost 20 years, some of it is kind of an online diary. I found myself looking at his archives for September 2001 today and it’s kind of incredible to read his feelings as the month goes by (I should say, his feelings, plus the world’s as seen through his eyes).

Another cool aspect of his website is he’s tagged most of his posts, so if you want to find neat stuff about maps or NYC or music or space or whatever, go wild. Years of the web at your disposal. This is kind of what my pinboard has become for me (and for anybody who stumbles across it): click on one of the tags and tumble down the rabbit hole of links I’ve grabbed on the topic.

Which reminds me of something I’ve been internally referring to as “the slow web,” the kind of stuff you find on the web that doesn’t exist in a chronological order (it doesn’t have to be “timeless” or anything, just not something that only makes sense in a timeline). In that way, it’s kind of the anti-blog, or the anti-feed (or the anti-flood). One of the defining factors of blogging or microblogging is that they happen in a chronological manner. You post with some frequency, and then others read your stuff in that order (or the reverse of that order). It means that everything has a kind of sticky nowness, you come to the blog every day to see the new stuff. In that way blogging is a lot less like a book and a lot more like a news program or a tv show or whatever. It’s mostly about now. And now is an OK time, but it’s not the only time. I’m kind of interested in writing things, or making things, that maybe you don’t read constantly. Maybe you find it and you read the whole thing and then you’re done and there’s no incentive to come back unless you want to read the whole thing again. Or maybe it’s something you come to and read a bunch of and then forget about and then one day remember for some reason you can’t quite put your finger on but you’re elated because you really enjoyed it the first time and so you start reading it again but oh boy it’s maybe kind of different this time because there’s new stuff in it. (A wiki is like that).

So anyway, a pinboard is kind of the slow web from a reader’s perspective. It’s fine to follow an RSS feed of it to keep up with what somebody else bookmarks, but there’s also this whole other mode of reading a given tag at random, and doing that doesn’t really depend on time, doesn’t depend on “keeping up on it” or anything like that.

So anyway, that’s what’s on my mind today. I don’t know what to make of it (figuratively or literally). Probably nothing for a while.

Last night my mother asked me if I’d like to join her at an estate auction. Neither of us had ever been before, nor was there anything in particular we were looking to buy, but it seemed like something fun to check out.

We arrived about a half hour before the auction began, so as to get a chance to see all the things on sale. Us and probably a hundred others. It was so strange stepping back and witnessing it all. All of us poring and pawing over somebody’s old furniture and trinkets. So densely packed were the people and belongings, it was hard to navigate. We were ants crawling over a picnic basket.

We took our seats as the auction began. Having never attended a real one in person, I was curious if the auctioneer was going to sound like those on tv and in movies, or if it would be a more low key affair. He turned out to be a bit of both, like a fast motion WestJet flight attendant (or for those who haven’t flown WestJet: a little mix of playful, sarcastic, and coy).

I like to think of myself as someone quite self-aware of when I’m being sold something. I try to stay conscious of how businesses don’t operate out of the kindness of their hearts (e.g., a sale is not a deal unless you were already planning to buy the thing anyway), I try to recognize when something is a scam, etc. Yet, sitting in this auction of mostly old furniture, I couldn’t help but think “Oh wow, that really is a good deal” and “Hmmm, well maybe I should place a bid on that” and so on. Of course, this is kind of the point of attending auctions in the first place, so no wonder! But it’s the fact that I was trying to avoid that urge and yet the auction made me feel it anyway — very effective.

But my mind kept running back to us all pawing over the old belongings and how strange it felt. What’s strange to me about the auction wasn’t that people were trifling through some dead person’s stuff; it was that that stuff is now separated from the dead person. It’s kind of out of context now and you just see this room full of no-longer-belongings. A lot of objects make sense when they’re in a house and tended to by a person, but how strange to see all of a person’s belongings strewn out for all to see, where they don’t belong.

This post might be a bit of a personal ramble because I don’t quite know how to say what I want to say, but I want to say it anyway. I’ve been going through a bit of depression lately and it’s been weighing on me quite a bit. I like to use my blog as a way to think out loud, so I figured I’d take a crack at writing about what I’ve been going through.

Aside from the culturally shared things to be depressed about this year (e.g., the monsters in the US government and the adult voting population who elected them), I’ve personally had a hard time with my move to Ottawa. I miss my old home much more than I anticipated, and feel out of place here (it’s especially weird to be back in my home country but not really feel home).

My friends here, both new and old, have been nothing short of amazing. They’ve been endlessly kind, warm and welcoming. But my out-of-placeness has made me feel kind of withdrawn and lonely, and I feel like I’ve been a bad friend because of it. The irony of this situation is not lost on me.

Lately, everything just feels hard. The days are short and lately, extremely cold (-20°C for the past week). I’ve been sick on and off for the last month. And things I normally enjoy, like reading and working on my side project Beach, have become quite a struggle (which of course makes me feel guilty).

Life is far from all bad, thankfully. My partner has been incredibly supportive and helpful; she is my rock. As previously mentioned, my friends here have been so good to me too, which really does make a big difference. I’ve had lots of support from non-local friends and family, too. And I’ve done my best to hold on to the things I enjoy and not be too hard on myself when I struggle, because I know it’ll get better.

I’m looking forward to some things coming up in the Spring. The days are getting slightly longer, the cold will get warmer (…eventually…). I will, I will struggle less with reading and working on my projects as time goes on. It’ll just take time.

Anyway, I hope this was helpful for you, because it was helpful for me.

The following thoughts are filled with spoilers and not necessarily in any order. This isn’t a review per se but more of a brain dump.

I hadn’t seen any trailers or really read anything about it before going in, so I went in as a complete blank slate, which I found really enjoyable.

Overall I really liked the movie. For the first time in a long time, this felt like an original Star Wars movie. I worried that it was going to be a refresh of Empire, but thankfully it wasn’t (there may have been some shared traits, but it didn’t feel like a retelling, the way The Force Awakens felt like a retelling of A New Hope). It was different, and at times a little weird, but I’m thrilled that they told a new Star Wars tale, even if it didn’t work 100% of the time. Risks are good.

The movie was absolutely beautiful. Rian Johnson really did a standout job directing this movie. Every shot felt well composed, like it could have been a photograph.

The score was great, but on my first viewing, it didn’t seem like it introduced any new musical themes, compared to The Force Awakens (which had Rey’s theme, Kylo’s theme, etc). I did hear the Imperial March though, which was not in The Force Awakens to my knowledge. The music was still moving though, even when it was bringing back familiar scores.

I absolutely loved every moment of Luke and Rey. Luke’s character showed a lot of maturity, like he’d been contemplating the Jedi religion and had fallen out of love with it. He questions it and implores Rey to not fall victim to it. He knows how powerful the Jedi can be and he worries that it’s become too strong of a legend, and that legends mislead people.

Which brings me to the first of two major themes I noticed in this movie: burning down the past. Luke wants to burn down the Jedi order (and with the help of Yoda, literally succeeds at doing so). Anakin’s lightsaber is discarded off a cliff by Luke and then later ripped apart by Kylo and Rey. Kylo wants to burn down the First Order and get rid of everything that came before him. There’s this moment where Rose (a fantastic addition!) has one half of a pendant she shared with her recently killed sister, and DJ, the thief character, wants her pendant as collateral for something, and almost without hesitation she gives it to him. She loves her sister and what the pendant symbolizes, but she literally does not want to cling to the past. The Last Jedi is trying to say: the past isn’t sacred, and if anything, it’s holding you back.

I read this theme as kind of a fuck-you to The Force Awakens. Where The Force Awakens was a reboot relishing its past, The Last Jedi burns it down. In fact, some of the movie almost felt like a fuck-you to JJ Abrams, especially with how Phasma was killed off (Phasma was a character JJ created in reference to….something I can’t quite remember!). She was barely in the movie and was killed off quite soon after she appeared.

(Also, thankfully this movie has way way less fan service. Sure, there are lots of things that are from the earlier movies, but only in ways that make sense for the story. There aren’t really any easter egg moments of like “OH! I remember that thing!!”)

Let me get back to Luke and Rey for a bit. For starters, every scene on the island (whose name I’m forgetting) was absolutely stunning. The colour, the scenery, the atmosphere. Sunshine and rain and darkness — talk about balance! It felt almost like a dream. There’s a part where Rey is following Luke, but sees something off in the distance. She heads towards this mystical place — this place is literally in a mist — until she enters the original Jedi temple. Like I said, it feels almost like a dream.

The island felt at the same time deserted and lived in, like a faded memory. The creatures all feel at home on this planet. Although, I will say the part where Luke milks the big creature on the side of the hill was a bit much.

I wish I had gotten to see a little bit more of their chemistry together. I wanted them to really dig in to what was bothering Luke, and what Rey was hoping to find. I could tell Rey didn’t want to give up and be pushed away, but I wanted to see her dig in her heels more.

Random things:

I loved the new character, Rose! She was sweet and charming and had great chemistry with Finn. I’m glad that she saved him and then she kissed him too. The Force Awakens felt a little devoid of passion at times, so it was nice to see them show some real affection in this one.

I felt like they made Finn a bit less interesting in this movie, and they didn’t give him enough to work with. His chemistry with Rose was good, but beyond that he’s kind of isolated throughout the story. I wish I had seen him interacting with the rest of the characters more.

On that note, pretty much all of the characters were split up in this movie, and the way they did it bugged me a little bit. Now, The Empire Strikes Back had Luke + R2D2 split off from Leia, Chewie, and Han, but it worked there because it drove the tension of the movie. Yes, they were on two different tracks, but they were pulling each other together by the end of the movie. I didn’t really feel that in The Last Jedi; the characters just kinda felt split up, and when they got back together it didn’t feel like as big of a deal. I think the chemistry of the main characters is one part that really shone in The Force Awakens, and we got less of that here.

One neat thing I did enjoy about this movie was that it felt very intimate. For almost the whole movie, most of the plot happens very close together, with the First Order hot on the tail of the Resistance. Most Star Wars movies feel spread out across the galaxy: a planet here, a planet there. And sure, they do zip from planet to planet at points during this movie, but the First Order chasing the Resistance through space acts as a central hub, a closeness you just don’t get in the other movies, and that felt really cool.

The moment when Holdo (Laura Dern’s character) jumps her ship to lightspeed aimed at the First Order cruisers was nothing short of astonishing. It was one of the most visually powerful scenes I’ve seen in ages. The audience around me gasped (and at a second viewing of the movie, a woman in the row behind me whispered “oh my god!” when it happened). Johnson’s directing here really let the audience feel the visual and emotional impact of what had just happened, which was so much more powerful than the typical “we just blew up the big bad weapon / monster” explosion you see in most movies today. The destruction of Starkiller Base has nothing on this moment.

(Side note: the geography of the planets in The Force Awakens always felt a little off to me, especially when Starkiller Base destroys the Republic capital planet. It’s weird because the Resistance base can see the explosion in the sky… so does that mean their base planet is really close to the capital? Which also seems really close to SKB? I could never quite get that straight in my mind.)

Back to chemistry for a minute, the chemistry between Rey and Kylo was super interesting and fun. I genuinely didn’t know what was going to happen and that was exciting. At first it kind of seemed like, oh no Kylo can read Rey’s mind and he’s going to track her down. But it wasn’t like that. It was a bond they shared and they really did reach out for each other (literally and figuratively). This not only continues to make Kylo an interesting and conflicted character, but it makes Rey a conflicted character too.

During the showdown with Snoke (and Kylo killing him was a huge surprise for me!), with Rey and Kylo fighting together, I got this feeling of: holy shit, maybe it’s going to be the two of them on the run from both sides for the rest of the trilogy. Who knows!

And that’s what was so exciting about this movie: I was surprised by it over and over again. The movie makes you care about the characters and then has them act in interesting ways. You can’t quite tell what’s going to happen next, but you sure as hell want to know.

Further random things:

The ending: first, after the First Order blasted down the base doors and Luke walked out to face them, there was this moment where the music was rising, Luke was facing them at sunset, and it just really felt like “Wow, they could head straight to credits right here and now and leave it as a major cliffhanger.” I had a few seconds of feeling that, but the movie continued.

I didn’t really end up liking the ending shot (the one with the kid who uses the Force to pull the broom to himself, then looks off to the stars). It felt a bit corny to me. But it does help reinforce Luke’s message that the Force is everywhere in the universe, it’s a part of all things and doesn’t belong to anybody. Even this child can have it.

The second major theme I noticed was that of having hope. Hope that things will get better, hope that you can rely on people to help you when you’re in need, hope that things will work out. Maybe it’s just me, but this coupled with the “resistance” felt really timely for present-day Earth where the rise of Trumpism in the United States feels like a hopeless situation.

And who knows. Maybe Trump fans will read this movie’s theme of “burn down the past” more as a nod to the “drain the swamp” ethos. It’s easy for everybody to see themselves represented in the heroes, no matter what their beliefs. Whether or not the themes of this movie were directed at me, I still appreciated them.

Anyway, I loved the movie. There were a few bumps here and there but overall it was interesting and fresh and risky. I can’t wait to see what happens next.

[In 2014] I gave myself a challenge: read a thousand books in my lifetime. I decided to start counting books I’d read since November 14, 2014 (although I’d read many books before this, I really only wanted to start counting then, so I could better catalogue them).

Last year I had a bit of extra reading time on my hands (yay unexpected employment loss!) and read 33 books. This year, I had a bit less reading time but still managed to get through 29 books, which I feel pretty happy about. For those keeping score, I’m now 86 books down out of my 1000 book challenge. Still a ways to go, but I’m really looking forward to breaking the 10% mark this year.

What a year it’s been (I assume, for all of us). Looking over what I’ve read in the last year, I again see some definite themes (because like all humans, I find patterns everywhere, and also I was the one who chose the books in the first place, so). We’ve got a bit of a doomsday / dystopia / destruction-via-media theme going on, plus systems, play, and cities (which I’ve connected this year thanks to what I’ve read), and as always some solid books on learning and education.

This was also the year I feel like I’ve sort of discovered fiction. Of course fiction’s always been great, but I think I haven’t really clicked with it in a long time, most likely because I’ve been reading the wrong-for-me kind, and because I’ve been focusing on a backlog of mostly non-fiction.

What follows is everything I’ve read in the last year, and notes accompanying the standouts.

Watchmen, by Alan Moore, Dave Gibbons, and John Higgins.

Easily one of the best graphic novels I’ve ever read. It’s gritty, sure, but more importantly it explores its characters and world as integrated and complex systems. That, and the just outstanding use of the comic form make this pretty much a masterpiece.

The Story of Your Life, by Ted Chiang.

Bought this short story at the Strand bookstore immediately after seeing (and loving) the film it inspired, Arrival. It goes in a slightly different, but enjoyable, direction than the movie.

The Meaning of the Body, by Mark Johnson.

By one of the authors of Metaphors We Live By, which I read and loved last year. This book contends, roughly, that human meaning is grounded in our physical bodies, with the argument beginning all the way “down” at our physical movement / flexibility.

It was a heady read to say the least, but has given me a new sense when thinking about cognition and bodies (especially when thinking about computer intelligence).

Congratulations, By the Way, by George Saunders.

Bridget Jones’s Diary, by Helen Fielding.

OK. Let’s first take a few minutes to bask in how fantastically laugh-your-arse-off funny this book is from start to finish. It’s good. It’s very good.

And at a deeper level, it’s even better. I think it’s important for men to read this book, not just because it’s enjoyable, but also because it explores the kinds of things our society puts women through (from calorie counting, to self help books, through to male fuckwittage). Yeah the book’s kind of absurd (like all satire), but that’s kind of the point. (I’ll also add I think it’s even better than the fantastic movie that it inspired)

Teaching as a Subversive Activity, by Neil Postman and Charles Weingartner.

I originally read this book in two days, after a personal recommendation from one of Postman’s friends, and loved it, but decided to re-read it a little slower this time.

It’s easy to read this book and think “Jeez, the author sure hates teachers,” but the better way to read it is as “Jeez, the author sure loves students,” and I think that’s kind of the point. It introduces the need to develop rock-solid crap detectors in children: children should grow up fully equipped to make meaning of their world and their surroundings, and should be immune to all flavours and aromas of bullshit.

This book has inspired my views on education more than any other book save Mindstorms. Not necessarily the particular views it espouses, but on the dire need for children to grow up as meaning makers, as epistemologists.

Nineteen Eighty-Four, by George Orwell.

Read for no reason in particular.

How to Watch TV News, by Neil Postman.

Read for no reason in particular.

The Systems Bible, by John Gall.

Play Design, by Chaim Gingold.

A thoroughly researched and well written thesis on play and its implications for game design, education, city building, and playgrounds. Completely mind opening.

Harry Potter and the Prisoner of Azkaban, by JK Rowling.

Street Fight, by Janette Sadik-Khan.

This book explores city and traffic design, and the relentless effort required to grow (or slow) it. This is my favourite kind of book, because it makes you see things which were previously invisible to you.

Ghost in the Shell, by Masamune Shirow.

Pokémon Red, by Nintendo.

You caught me. This is not a book but a video game. But you know what, I’ve decided to include some video games in my quest because what is a video game like this if not a story, fleshed out with characters, and exploring themes?

While Pokémon Red is kind of childish at times (duh), it also holds up pretty well after all these years (minus the whole dogfighting thing). It’s a great case study on keeping a learner (player) engaged and feeling confident — yet challenged — basically at all times.

Mindset, by Carol Dweck.

I should have read this book a decade or two ago, but I’m glad I’ve at least read it now.

Sapiens, by Yuval Noah Harari.

A Tale for the Time Being, by Ruth Ozeki.

Proust and the Squid, by Maryanne Wolf.

Fascinating book about the history of reading and writing systems, and how the brain reads and learns to read (with supreme difficulty). It also touches on what causes struggles for those learning to read.

I absolutely loved this book, and it’s given me a newfound appreciation for reading and fluency.

The One Device, by Brian Merchant.

Landscape as Urbanism, by Charles Waldheim.

Mike Myers’ Canada, by Mike Myers.

I’m having so many feelings about Canada this year but they’ll have to wait for future blog posts. Very charming book though.

The Death and Life of Great American Cities, by Jane Jacobs.

Stories of Your Life, by Ted Chiang.

Contact, by Carl Sagan.

A few things:

This book shook me to my core, so deeply and so completely it took me a few days to recover after reading it. I loved it.

It’s easily become my favourite fiction book I’ve read, and is a pretty high contender for favourite book, too.

I’ve tried to write a few blog posts about the book and my love for it since reading, but have struggled to put it quite into words.

Picked up a copy of this book at Ottawa’s Black Squirrel Books, a used bookstore + café that’s quickly become one of my favourite places in the world. A used bookstore is nice because it’s kind of like “this is what your community reads.”

You may already be kind of familiar with Contact, as it inspired a movie with the same name and roughly the same story, starring Jodie Foster. It’s a brilliant (and I think, subtly underappreciated) movie, one I’ve enjoyed for many years. Both the book and the movie do a wonderful job conveying their stories, each using its medium to the best of its abilities (I don’t think it makes sense to say which is “better,” but if you enjoyed the movie, there’s even more to love about the book).

The story is optimistic because it sees the best in its characters, often even its antagonists (you see, a message from a distant star has caused quite a stir in the religious community, but Sagan presents the religious leaders not as brainless deniers of Science, but as people viewing the world through a different lens). It’s optimistic that Science is a guiding philosophy that breaks down international borders and undergirds a deeper human understanding.

You might say it’s “optimistic” in a naive sense, that there’s no way humans can all work together to solve global crises or challenges, and you might be right. But Contact asks: what if we could do that? What if the nature of the universe is so profound that we can all rally behind it? Contact asks not simply “Wouldn’t it be nice?” but “Shouldn’t we strive for this?”

And if you’re struggling to find hope these days, what better way to find it than to reach out to a universe that surrounds us on all sides, beckoning us?

But What If We’re Wrong?, by Chuck Klosterman.

Annihilation, by Jeff VanderMeer.

Maus 1, by Art Spiegelman.

Maus 2, by Art Spiegelman.

Maus was a hard read for good reason. It’s a biography of a Holocaust survivor, so naturally some of it is pretty fucking heavy. But the story is beautifully and artfully told (and not just because it’s a graphic novel). Spiegelman makes good use of levity throughout the story to calm your nerves as you read, which I really appreciated.

The Holocaust is never not absolutely, heart-wrenchingly shocking to me. It’s hard to grasp the magnitude of it sometimes, and I think a lot of media does it a disservice (e.g., most movies about it seem to focus on (American) heroism during the war, on “good vs evil,” but rarely are stories told of the institutional antisemitism and other bigotry).

I’ve had this vague notion for the past year or so about how people think, and that how people think has changed over the ages. Not just in general notions of “people think nicer things now” or something like that, but that people think in completely different ways today than in previous times.

When you read something written many years ago (say 50, 100, or 500 years ago), it sounds quite different to you than something written today. That’s partly because things in people’s lives have changed (e.g., we have the internet today but they didn’t 100 years ago), partly because language has changed (words have new meanings, there are new words, etc.), but also partly because the sorts of things people think about have changed (e.g., different political or social events happen at different times).

But I think people aren’t just thinking about different things, I think they’re thinking about things differently, because part of how we think depends upon the things we think about. There are a few things floating around in my head that support this notion (in no particular order):

the medium is the message (media change the way we think in order to use them; different media mean thinking different thoughts)

metaphors we live by (if metaphor is a fundamental part of our cognition, and if our metaphors change over time (along with language) (I’m not certain that they do but I suspect they do), then mustn’t our cognition change along with our metaphors?)

Kieran Egan’s The Educated Mind (lays out a pretty good argument for modes of thinking across different cultures)

situated cognition (changes over time because the literal physical objects we think with, eg slide rules, change over time — we think with different physical things than we used to, that has to change how we think, doesn’t it?)

Note: I’m not saying that our different thinking is necessarily better thinking, only that it’s different. It might be better, but I’m not asserting that here.

I am, alas, under the influence of the technology of my era and I can’t help but think about the brain + mind as “hardware and software.” In this metaphor, the human brain hasn’t changed a lot recently, but the mind — the software — has changed a great deal, and changes quickly. This isn’t an OS that’s “loaded” but more like one that exists socially, ephemerally, distributed across all people we interact with / are influenced by. It’s a gooey mess of influence; maybe it’s a fog, maybe it’s an ocean with currents.

Re-reading his summary, I noticed it was part of a series of books he read in 2016, many of which were about education. So I decided to read through his posts about education, in hopes I might stumble upon something relevant to what I’m working on.

Some of the earlier posts in that series included pictures of hand-written notes he’d taken. He calls them “brick notes” and I find them kind of fascinating. From what I can tell, he groups together themes and key words from what he’s reading, sometimes including page numbers too. To top it off, the notes are all on a single page, which doubles as a bookmark (example).

Techniques people use in their lives are fascinating to me. He’s made his own cognitive tool — a super power of pen and paper — which helps him read and probably helps him write. I’m posting his tool not only for its own sake, but as an example of a kind of tool I find fascinating.

Isn’t it wonderful how many kinds of books there are? Reading through his year of reading, I’m astounded by the number of “topics” that have books written about them (“topics” is in quotes because the idea of giving a definite label or a genre to a book probably limits my thinking about what books can be about (or what books can just be)).

Isn’t the web wonderful? I don’t very often take the time to reflect on what it’s like to mosey about on the internet, but it sure is nice. And it beats the pants off channel surfing (or I guess, YouTube autoplaying videos until you die). I have done one of these before, though.

What’s the computer / software equivalent of a poem? What’s the software equivalent of a poet? Or a software song? Or a software sketch?

These are kind of silly questions, because poems and songs and sketches are so much more than “small versions of bigger things” (a poem, for example, is much more about what it expresses than the fact that it’s usually short).

But still I wonder, can you make a little thing that captures a feeling? That by watching it or by using it or by exploring it, you somehow recreate that feeling? A feeling that’s bigger than just what you see in front of you, bigger maybe than the sum of its parts? Can you express a feeling as a system?

I recently began working on a new app. It’s one part design tool, one part programming environment, and lots more too. But at its core, it’s a medium for creating, thinking with, and understanding complex systems. Of those goals, understanding a system is probably the most important, but murkiest to me.

What does it mean to understand a system? Is it the same thing as “reading” a system? How do you go from not understanding to completely understanding one? What does the threshold look like?

That’s just for a single system, so how do you generalize these principles to all systems? What does it mean to be “fluent” with systems?

I happen to have a few ideas on how to answer these questions which I’ll post in the discussion section below, but I’m curious to hear your answers too. Please feel free to use examples, to link to papers and books I should look at, etc. I’m really curious how you think about this topic.

I decided to try changing up the rules for the discussion system on this website. Previously, replies had to be at least 140 characters long before you could submit. This was done in hopes of discouraging one-off or spammy comments. But, I wonder if it’s been discouraging people from saying anything at all?

Now you can reply with just about anything, so long as it’s 5 characters or longer. I’m sure you can manage that.
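For what it’s worth, the new rule is simple enough to sketch in a few lines. This is a hypothetical reconstruction, not the site’s actual code; the constant and function names are mine:

```typescript
// Hypothetical sketch of the reply rule described above.
const MIN_REPLY_LENGTH = 5; // previously this was 140

function isReplyAllowed(reply: string): boolean {
  // Trim first so whitespace padding can't satisfy the minimum.
  return reply.trim().length >= MIN_REPLY_LENGTH;
}
```

The trim is the one design choice worth making: without it, five spaces would count as a valid reply.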

You can read all about it (and contact me) on my hiring page. I’d love to help with your project, whether you’re maintaining an existing app or starting fresh. I’ve done it all and can be of great help.

Quick personal announcement time! My wife and I will be moving to Ottawa, Canada at the end of May, and we’re pretty excited. I’ve been living as a Canadian ex-pat in New York City since the start of 2013 and my time here has been nothing short of amazing. However, it’s time for a new chapter.

There are many reasons for the move: best of which is I miss my home country and the slightly chiller lifestyle of Ottawa; least best of which is the current US political climate (and my worries about Canada following suit). But suffice it to say, I’m excited.

If you’re in New York, please reach out and we’ll grab a drink, some food, or just walk and talk the streets. I’m gonna miss all of y’all.

I recently came across a post by Belle Cooper about her experiences at Playgrounds Con, especially with respect to diversity and inclusion at the conference. It’s a great post, and her handling of nuanced issues sets a great bar for me and everyone else in the iOS / Mac community.

Among other things in her post, Belle discussed sexism she noticed among some of the speakers at the conference. As I had given a talk at the conference, I was particularly intrigued to hear what she had to say. She began,

The most frustrating example was a misattribution of a quote by a woman to a man. The speaker in question obviously thought this quote was useful enough to include, so they played a video of a man quoting his female colleague. After the video ended, the Playgrounds speaker attributed the woman’s quote to the man in the video who quoted her.

Quoting and attribution are always important and should be treated very carefully, but it’s especially infuriating to see a quote by a minority misattributed to someone in the majority.

My heart sank out of guilt, because I am the (unnamed) speaker here and Belle is completely correct: I misattributed a quote from Vi Hart to Alan Kay during my talk. This was 100% on me and I’m glad she pointed it out. I try really hard not to do this sort of thing, but in this case I did indeed make a mistake, and regrettably did not attribute the quote to Vi as I had intended.

I’d like to thank Belle again for taking the time to share her experiences at Playgrounds and for calling out the examples of sexism she saw. I’m sorry I misattributed Vi’s quote to Alan, and I’m glad to learn from this mistake. I hope others can learn from it too.

The last day or so I’ve been logged back in to Twitter and am dipping my toes in it again. I’ve been on a twitter hiatus for a few months now (and will likely return to one shortly), because I found the service quite stressful — both in terms of the amount of bile / bad news it showed me, and in terms of “I must constantly refresh it because what if somebody reacted to something I did on it?”

But I can’t say I didn’t miss it at least a little bit. Here are some things I did and didn’t miss.

Did miss

Some familiar faces (or at least, their avatars) and the things they tweet about.

A general sense of “people are around and some of them are listening.” I don’t think Twitter is a great place for “being connected” (though it can give a semblance of that), but it is a place for some awareness that others are around. I crave a more intimate version of this, though.

Jamming out on twitter. I like to think I’m “good at tweeting” (if that’s a thing). It’s debatable whether this is a good thing or not, but it’s something that makes me happy. I’ve used the service for over a decade and I’ve sorta got the hang of it now. It’s a fun place to play with language.

Similarly, it’s a fun place to riff with people during shared events (like being at a conference, watching an Apple Keynote). This is insufferable to anyone not in on the thing, but if you are, it’s a riot.

Did not miss

All the bile. The hate, the sexism, the bots, the nazis, the trump supporters. The arguing, the fighting, the bad vibes. I don’t want to keep my head in the sand about the Legitimately Bad Shit happening in the world, but I also don’t want to read about it from dawn till dusk.

Dudes. A general profusion of dudes. Look, I know there are many of them out there, and heck, I am one too. There’s nothing inherently wrong with being a dude. It’s just that programmer-twitter is overrun with them and it’s a big big drag.

Related, and worse, is Rational Dude Twitter (it has some overlap with programmer twitter). Rational Dude Twitter is where dudes try to sound so wise by using real big words, academic words, pedantic phrasing. And they flex their big Rational Dude Muscles by squeezing as many of them as they can into a tweet (or, jesus, a tweet storm). What an utter bummer these people are.

The aforementioned stress caused by needing to feel “on” all the time. Refreshing twitter all day, especially if I’ve just done something on it (what do people think of it?). Trying, and failing, to not care. Closing the tab and then immediately re-opening it. Checking twitter when I wake up, when I stop at a traffic light, when in line at the store, when I’m poopin, when my subway car gets cell service. Certainly not everybody gets as roped in as I do, but I sure do.

TRYING TO FIT STUFF IN A MEASLY 140 CHARACTERS.

Trying to communicate anything of nuance, whatsoever. I’ve tried. It’s really hard. If you try too hard you end up sounding like Rational Dude Twitter where you only speak in maxims and koans.

A horrible, pathetic, embarrassing, offensive use (or disuse) of hypertext and rich media. Twitter is a website that doesn’t let you make web hyperlinks (only auto-links). You can’t bold text (like the Xerox fucking Alto could do 45 years ago). You can’t embed other media (except that which twitter has deemed acceptable). Need to explain something complicated? Screenshot of text or a “gif” (it’s not a gif) of software it is!

Anyway, all of that is to say, I’ve got some feelings on the subject. I’ll write more soon (because embarrassingly enough, this website doesn’t support auto saving yet and I’m worried I’ll accidentally delete this otherwise nice post).

Picking up where I left off…

This is the part where Jason-the-programmer says “And so here’s the technology I’d like to see to improve this” and I’ll rattle off a bunch of features for twitter to do (and they won’t do them) and I’ll feel satisfied.

You can probably guess from my tone I’m not exactly about to do that. But I would like to imagine a bit of an alternative to twitter, which has many of its strengths and fewer of its faults. This is not a 3rd party twitter app, and it’s not an alternative service (like app.net was) but instead is an inkling of a “protocol” for people talking to each other on the web without a shitty service in the middle.

Before I go further, I’ll say this was written hastily, probably has lots of flaws, and is almost certainly already kinda in the works in the form of WebMention and others.

The biggest thing I’m after in my imagined web network is a really solid way for having good relationships with people online. Too often online life presents the artifice for relationships, without providing much in terms of actual relationships. Corporations (it’s always corporations) say they’re trying to make a more “open” or “connected” world. Connections are great, and underrated (hey look, I can contact just about any living human being on the planet, no matter where they are, in a matter of minutes, and if that isn’t absolutely mindbogglingly astonishing, you should take some time and reflect on it), but as far as building relationships go, connectivity is a bare minimum — necessary, but not sufficient.

And I’ll admit, “relationships” are one of the most complicated aspects of the social human being, and I don’t hope to facilitate or foster all or even most aspects of human relationships via my proposed online world, but boy wouldn’t it be great to foster them just a little bit more than we currently do?

So I think my narrow definition of “relationships” here mostly means intimacy. Not in the sense we often think of it (as physical intimacy between people, often sexual), but more so as closeness and trust between people. I want to know what my friends are up to, I want to be able to talk to them (about the big stuff, but also about small talk stuff). I want the opposite of loneliness, and the opposite of loneliness isn’t dozens of people, the opposite of loneliness is togetherness.

Related, I don’t need or want thousands of online friends. I can’t deal with thousands of most things (unless it’s thousands of dollars, and even then my track record is only so-so). I don’t want a platform to grow my brand, I want a place to hang out with my friends online.

When I was a teenager I used to hang out online just about every night. For me, that was MSN Messenger: most of my friends were there, not all at once, but at various times throughout the evening. Yeah, it was mostly a place to gab, but it was also a place where you felt you could confide in those close to you. You had a sense that other people were around, and that you could be together for a little while. People had “statuses” to indicate when they were around. If someone was online, MSN told you so, and you knew you’d have pretty good luck spending some time with them. Likewise, if their status was “Away” or “Busy” or “Offline” you knew they probably weren’t around for hanging out, and that’s OK, because you had the right expectation.

With twitter, I can kind of guess when my friends are around, but I’m not really sure. Maybe he’s up for tweeting back and forth; maybe she just put her phone away because she’s going out tonight. Who can fucking tell?

What I’d really like is a place where me and my friends can hang out online. One that’s on my website, and that’s on your website, and on all your friends’ websites. I cannot and will not trust twitter to do a good job at this: obviously I’m just Jason and they don’t have to listen to me; even if they did, it’s not the product they’re trying to create; even if it were, they’re mired in the vitriol that resulted from their previous poor design decisions; and on top of all that, they’re a corporation that doesn’t really give a shit about my mental wellbeing.

I don’t necessarily want another IM (although hey, if I could recapture the heyday of my MSN years, I wouldn’t turn it down), but I’d love to reintroduce the concept of online status into today’s web networks. It doesn’t have to be straight online status, it could be something like Glancing or it could be an evolution into something altogether new, but I should at least be able to tell when my friends are “around.”

I don’t want to be constrained to 140 characters, as that makes it really hard to talk about just about anything with just about anyone. People are complicated and messy and we need a little bit of breathing room to express that. I’m not saying that everyone should be writing blog posts to each other (necessarily), but holy crap give them the space if they need it.

Maybe this looks like a feed, maybe it looks like IM, maybe it looks like something a little different from that. But this is the sort of thing I want from an online network of people. I want to hang out, I want to be together, online. And I don’t want to be dependent on a corporation for it, either. It doesn’t necessarily have to be private, but it could be.

Could this work? Sorta (probably). Everyone runs their own server (oops, that’s probably game over), and the servers communicate via an API / protocol about new posts, for example. There’s another obscure networking service that works a bit like this, and it’s done alright. I won’t go too far into implementation details beyond saying that “it’s probably possible” because that’s all that matters and because I’ve yet to fully flesh out and design what the service would actually look like.
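To make the hand-waving above the tiniest bit more concrete, here’s a toy sketch of the idea, entirely my own invention and not any real protocol: each friend runs a little server that holds their posts and answers one question from peers, “what’s new since I last checked?”

```swift
// A purely illustrative sketch (hypothetical names, no real API):
// each person's server exposes its posts, and peers poll each other
// for anything new since their last check.
struct Post {
    let author: String
    let timestamp: Int   // seconds since some shared epoch, for simplicity
    let body: String
}

struct FriendServer {
    let posts: [Post]

    // The one "API call" a peer would make: give me posts newer than t.
    func newPosts(since t: Int) -> [Post] {
        posts.filter { $0.timestamp > t }
    }
}

let jason = FriendServer(posts: [
    Post(author: "jason", timestamp: 100, body: "hello"),
    Post(author: "jason", timestamp: 200, body: "still around?"),
])
let fresh = jason.newPosts(since: 150)
```

In a real network the polling would happen over HTTP with some agreed-upon JSON shape, but the essential exchange is just this: servers sharing deltas with each other.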

These are some of my meandering thoughts on Twitter and social life on the internet in 2017, and maybe social life on the internet in 2018 and beyond. What do you think? What do you want from your network?

On February 23 I spoke at the Playgrounds conference for iOS developers in Melbourne, Australia. I spoke about the purpose of education, what programmers can do about it, and how the current “learn to code” movement falls totally short of accomplishing important educational goals.

I intend to provide a more formal, essay version of what I covered in the talk in the coming weeks (famous last words), and I’ll definitely link to the video recording of the talk once it’s published, but in the meantime I wanted to provide a few links to things I talked about as fuel for anyone interested in following up.

Alan Kay

As mentioned in my presentation, my talk was largely inspired by the work of Alan Kay. Indeed, part of me wanted to just hop up on stage, hit play on this talk by Alan, and let the whole thing play out. Alas, I also wanted to provide a bit more context for the Swift / Playgrounds audience, so I decided to speak a bit more in terms of what Apple is up to. Suffice it to say, these ideas are neither mine, nor are they new (personal computers have been seen as potential devices of enlightenment roughly as long as they’ve existed). See also Alan’s paper where he envisioned something like the iPad nearly 50 years ago.

Where do we go from here?

Let’s say you were in the audience of my talk, and you more or less liked what you heard, and you want to participate. What now? It’s a tricky question to answer, but I’ll do my best. (Honestly I was quite unprepared for the largely positive feedback to the talk; I probably should have prepared this sort of answer in advance!)

As mentioned in the talk, the first thing you should do to help is learn. Read books about learning and education (and understand the difference!). Talk to teachers and learners and try to understand what their needs are.

“Learning” is of course an ongoing process, not one you start and finish before doing something else (in fact, you’ll often learn a lot by studying and doing at the same time). Try to make informed inventions. Do you think everyone needs tools to create and reason about complex systems? Try to make such an environment! (Hat tip to the three men from New Zealand, whose names escape me, who chatted with me about SimCity — I could write a whole post about that game, what’s good and what’s bad from a systems learning perspective. For now, google “Alan Kay SimCity” and you should get an interesting discussion about it.)

And while you’re learning and building, reach out! There’s a small community of people who research and develop this sort of thing, and the more people working on this, the better. These are hard problems, so the more people working together and collaborating, the more likely any of us will make a positive impact.

Coming Soon

As mentioned earlier, I plan on eventually releasing a more formal, essay version of my talk, with full references to everything I spoke about, hopefully in a less-nervous presentation (getting up in front of 200 people is hard too). I will link to it here when it’s ready.

I’ll also link to the video of the talk when it’s available.

Thank you so much to everyone who had encouraging discussions with me about the talk. It truly made all the work worth it.

Like many people in the tech industry, I work in what’s called an open office plan, where everybody works in one large, open space. A big shared space can often be great for collaboration, but at times I find it hard to concentrate.

You know how it goes: it’s easy to focus in the early mornings, but by around 10am, once enough people arrive at the office, things start to get pretty smelly.

Too many people all together in a big open space can get stinky really fast, and that makes it pretty hard to focus on my work. It’s distracting when you’re trying to figure out a tough bug and your nostrils are burning from all the stank wafting over your desk.

Now, I know what you’re thinking: if it’s too smelly in your office, just put something that smells nice up your nose! Like many people I work with, I’ll often do this. Last week I was resting freshly sliced bread on my upper lip, and this week I’ve started shoving cinnamon sticks up my nose. But doesn’t that seem kind of strange?

Don’t get me wrong, I love smelling good things. There’s nothing like the familiar scent of freshly baked cookies to bring me back to my teenage years. You don’t have to tell me how powerful the memory of aromas can be.

But it still feels strange. As much as I love smelling great things, I wish I worked in an office that just wasn’t smelly in the first place, where I wouldn’t have to shove things up my nose just so I could focus on my work. That’d be music to my ears.

Last week my grandmother met Canadian Prime Minister Justin Trudeau while playing her weekly game of cards with her old lady friends at a church in Fredericton, New Brunswick. Here’s what she said of the meeting:

Oh, we chatted about the weather, the cards, how we were doing, our age, no big politics. Nice friendly chat. We were playing cards when he came in, he came over to our table and asked all kinds of questions about it. Real friendly. Security was unreal. They brought the dog in to sniff all our purses before he arrived, went in the bathrooms before he came in. It was really something.

Made a lot of old people kinda happy today, he did. Lot of young people too. Didn’t tell anyone around that he was coming, it was just our card group, and a few people from church. Didn’t want to attract big crowds, you see.

Big black cars and limousines. It was something out of a movie. Unreal.

Any of these activities are a good way to pass those in-between moments, those crumbs of a day, and get me through to a bigger, meatier morsel of time. They’re a way to kill time, but why would I want to kill time? Time is precious and limited and can never truly be gotten back. I’ll become a wrinkly old sod before I know it; I’d rather not accelerate that plan and miss any of the life on the way. Those crumbs may be tiny, but together they can be filling.

I want to emphasize I have nothing against Twitter, listening to music, podcasts, or reading. All are excellent tools which serve their own purpose of entertainment, enlightenment, or information. All are important, but turning to them for the sole purpose of killing time seems perverse to me.

The thing I’ve realized recently is it’s easy for me to be “consuming media” from the time I wake up to the time I go to bed, and often I do. My [phone] alarm goes off in the morning, might as well check Instagram, emails, New York Times, maybe RSS too. Have some breakfast and watch some YouTube. Read a book on the train to work. Work 8 hours on a computer. Read a book or my phone on the train home. Watch some TV with supper. Maybe watch a movie or program or read on my computer mostly until bed.

Not every day is like this, but it’s entirely possible I can go whole days constantly glued to something. No one of these things is inherently bad, especially not on its own, but the totality of it adds up to a whole day (or lifetime) where I don’t have a lot of time for my own fucking thoughts.

Lately it feels like my own boredom has become a privilege, but my, is it ever wonderful. To realize I could go whole days without any time to just sit and think: what kind of life is that?

It’s been a weird and exhausting couple of days. Friday was Donald Trump’s inauguration and yesterday was the DC Women’s March / protest, which happened in DC but also in just about every city in America + around the world.

My wife and I marched in NYC. We’d had friends over Friday night to make protest signs (and also tacos… yum). I made two signs, “think critically” and “pence sucks too” (both of which I was very proud of). My wife made all kinds of signs, like “dissent is our right,” while others did things like “Love trumps hate” and “we deserve better.” It was nice to have company, to feel a togetherness in our home, which is so rare in NYC these days, at least for us.

Yesterday brought the march. There were oodles of people sharing the train with us, all headed to the march, most with their own signs. We’d made extras, just to give out. One older lady on our train carried a really well made Russian doll / Trump sign. She said she worked with children and that Trump’s “no puppet!” arguing was at about a 3rd or 4th grade level.

We arrived at Grand Central Terminal among hordes of people, all headed to the march. We waited for some of our friends to show up near a Starbucks on Lexington (maybe?). While we waited we gave out some of our spare signs. I also held my “think critically” sign (this was my first protest, but it seemed like a good idea) for the passersby to see. I got more nods of approval than I expected, which made me feel good. Interestingly, throughout the whole day, I seemed to mostly get nods / compliments from older people (say, over 50). I’m not sure I saw anybody under the age of 30 even react to the sign. Ah well. I tend to find the older residents of NYC fascinating, so I’ll take this as a good sign.

Earlier, on the way to the protest, one lady in Park Slope scoffed at our friend’s sign, which said “Obama Cared.” The lady asked what Obama ever cared about, to which our friend replied “everyone!” It was a mostly peaceful disagreement, but it was still nerve-wracking to me. I generally don’t like confrontation. How lucky are we, I said, that we live in New York, where pretty much everybody already agrees with our political views. I can’t imagine having to protest in a place where my views were the exception / outlier. That’d be real confrontation.

The march itself was quite powerful. Thousands of people, of all ages, of all kinds, were marching together, slowly. We were packed into the streets. Marching from Lex (?) @ 48th street, we glacially made our way to 5th ave, then up to 55th street (right before Trump Tower, which was heavily barricaded), over the course of about 3 hours. There were so many signs and chants and songs and people that it’s hard for me to make sense of much of it, but it was a tremendously powerful and moving experience. People were courteous, but also quite riled up.

Just being surrounded by so many people who cared enough to show up was truly touching. Sometimes the world feels hopeless these days, but yesterday showed me loud and clear that the world is not about to take this sitting down. There are people who want to make a difference, and they want to do so by rejecting the bad, and working towards the good. That’s a powerful feeling that goes a long, long way.

The demonstrations in DC drew possibly 3 times as many people as the inauguration did the day before. That’s a powerful statement, and a powerful act of defiance, and it will not go unnoticed.

A few years ago a friend and I were hanging out at a dev conference and we were talking to someone generally well known and well respected in our community. The guy was essentially berating my friend about a pretty inconsequential detail of my friend’s app stack, and though my friend made a good defense for their stack, and tried to agree to disagree, the guy kept berating them.

I knew this hurt my friend, to be given shit by somebody generally respected, in front of our peers. I just stood there. I just stood there.

My friend and I talked it over afterwards, and they were OK. But there was a clear moment when I knew I should have said something in the moment, and I didn’t. It was clear as day to me that I should have done something, and I didn’t.

~ ~ ~

I used to work on a team with a very toxic coworker. On more than one occasion, he publicly shamed me in our team’s Slack room. That hurt pretty bad, but what was worse was that nobody, not one of my teammates, stood up for me. Some of my peers privately messaged me about it, which was nice and helped, but nobody called out the toxic guy. Not even our tech lead, who I quite looked up to (until then).

It’s bad enough to be bullied, but it’s especially degrading to have people watch and do nothing about it.

~ ~ ~

There have been more moments than I’d like to admit when I saw something wrong, and knew exactly what to do, but instead did nothing. Whether it’s somebody being bullied, or somebody who just needed help in any way, I’m ashamed to say there have been many times when I didn’t stand up. But I’ve been actively working on changing that.

For the past few months, when I see somebody who needs help, I’ve felt it. I’ve felt the voice inside me that knows what to do, but is usually ignored, and I’ve listened to it. It’s oddly difficult, but I’m doing my best to stand up.

I’ve helped people on the street. I’ve twice called out people being assholes at work, telling them and other people around me that their behaviour is not OK, and have sympathized with those who were bullied. I even wrote a post about not being mean. I’m not saying any of this to self-aggrandize, and I obviously don’t need any kudos, because as far as I’m concerned, standing up is just the bare minimum for being a good human being.

Given the current political climate in the world, given America has elected a fascist as its president, I feel like we’re all going to need to be standing up a lot more. We should have been doing it all along.

I hit a point a few months ago when I realized I needed to reset my relationship with “social media.” I had no interest in leaving any of the networks I currently use, but I did need to change their level of importance in my life. I continuously get meaningful value from these products, and some of my closest relationships are the result of them. This isn’t about deleting accounts, this is about reprioritizing, about figuring out how much importance I assign to these services and how I access them. […]

I realized that I needed to be comfortable existing in a moment, in my own skin, alone with my thoughts. Louis C.K. has this great bit about just being a person. I remember seeing this and thinking, “damn, I can’t remember the last time I was just a person.”

Tara’s got some great tips here. Unfortunately, I’ve never had much luck toning down social network use; for me, it’s always been mostly all-or-nothing. That’s one of many reasons why I’ve been off Twitter for the last month or so, and why I plan on mostly remaining off it indefinitely. The thing I keep reminding myself, though, is that it’s in Twitter / Instagram / Facebook / etc’s interest (and their aim) to keep us glued to our screens as much as possible — i.e., it’s not a personal failure that I’m “addicted” to social media — it’s designed to have me be addicted.

World War Three, By Mistake. Eric Schlosser with a fantastic essay about the US and Russian nuclear systems. More people should read this, and here’s a taste:

What worries me most isn’t the possibility of a cyberattack, a technical glitch, or a misunderstanding starting a nuclear war sometime next week. My greatest concern is the lack of public awareness about this existential threat, the absence of a vigorous public debate about the nuclear-war plans of Russia and the United States, the silent consent to the roughly fifteen thousand nuclear weapons in the world. These machines have been carefully and ingeniously designed to kill us.

Tonight was spent hanging out on my computer (an iMac), doing a couple of things. I wrote some notes in a text editor, browsed the web a bit, collected a few images and goofed off designing a web page in Sketch (all the while listening to music).

I kind of can’t imagine having an evening like this on iOS. Certainly not on an iPhone (because its screen is too small), but I also can’t imagine it on an iPad, even with a physical keyboard. The sort of thing I did tonight had me rapidly bouncing around multiple apps, often using them simultaneously. Browse some images in Safari, drag them into a Dock folder, pick the ones I like from Finder and drop them into my Sketch file. Arrange the images in Sketch; nope, re-arrange them so the big one’s at the top; nope, put it on the bottom; ok, move all the images to the left; good. Pick a layer and change its colour using the eye dropper on one of those images; ah that’s not right, pick the next one; yeah that’s it.

Can you imagine doing anything even remotely like that, even on a big ass iPad with a keyboard? I’ve waited for years to see something like this on iOS, some way of “hanging out” and mucking around on the device, but I’m still waiting.

A lie I keep telling myself is that multi-touch is so fantastic. It’s amazing, right? You can use all your fingers (and then some!), to uhm, touch your screen. To do what, I still don’t know. Almost ten years of iOS and about the best multi-touch app I can think of is Maps: it’s got two-finger gestures!

After all this time, after all this waiting and lying to myself, I think multi-touch has been a big red herring. I’ve always looked at it and seen potential, like, this is the year of the multi-touch desktop but it’s never materialized. iOS has always felt incredibly stunted to me, but I kept telling myself, we just need time to re-imagine software, we’re all just stuck in the desktop mindset, it’ll come.

I don’t think it’s coming.

At its finest, I think iOS is a fantastic context sensitive information graphics system (as discussed in Bret Victor’s Magic Ink essay). It’s always with you, it’s location-aware, and it’s usually got an internet connection. Mix all this up with a zippy processor, and you can get a lot of graphical bang with very little interaction buck.

I almost wish the iPhone didn’t have any input at all. I wish it was just a big screen (with network, GPS, etc). That’d force app makers to make honest-to-god context-aware software. It’d show you relevant information, without expecting you to poke and prod with your fingers. It’d be powered by all kinds of data it currently has, but wastes. Those emails you got about a housewarming party would power the device to show you Maps locations when needed, calendar events when needed, shopping options when needed.

It wouldn’t have to pretend to act like a desktop computer, it wouldn’t promise to replace it. It wouldn’t be a “consumption” device, but it would be an information device.

Of course I’m exaggerating; you’d still need some input to OK things, sometimes type things, etc. This is an exercise of the imagination more than anything. Yes, there are new sorts of things you can create on an iPhone, but much of it feels imprecise or ready-made. Yes, you can squeeze creativity out of just about anything, but that doesn’t mean it’s tailored for creating.

Back to my futzing around tonight: so what? I think what I’m trying to say is I’m realizing iOS is not nearly as exciting as I used to tell myself it was. There are neat personal-information related things you could do with it, but beyond that, what kind of computing can I do with it? All of my thoughts about making new software have been bottlenecked by “well, phones + tablets are the future, so how do I make this work on a phone?” and I think it’s time I stop asking myself that question.

I don’t think a keyboard, mouse, and 27 inch rectangle are the future of computing either, but I think they’re better than fingers on glass. For now, that’s where I think I want my computing, my designing, to be. iOS may prove me wrong, but I’m not holding my breath.

If you’ve ever criticised leadership in any way, you know how hard it can be to move a muscle against an invisible wall, endlessly high and dispassionately immense. When something seems wrong enough that you, a single tiny person in a big world machine, feel moved to action, you start doubting yourself. If this is so horrible, why hasn’t someone else already done something about it? Surely this would never be allowed to happen? When you open your mouth to tentatively voice your concerns everyone is suspiciously quick to violently agree. They already know it. Dysfunction is obvious. Action is hard.

It’s fucked up that being interested in this random programming language, not even for the reasons the fangirls love it, suddenly caused everyone to start being nice to me when I’m in fact the same trash can that I’ve been all along. Coming upon the Correct Signal by accident made it all feel extra wrong and extra strange, like I killed a man and wore his skin for a suit and suddenly inherited all the achievements ever made in that body.

For years, Facebook’s headquarters in Menlo Park featured a rectangular sign that reflected the ambition and spirit of Mark Zuckerberg and his legions of dedicated employees. It read, in bold, red lettering, “Move Fast and Break Things.” Twitter had a similar poster that hung in its San Francisco office, noting “Let’s Make Better Mistakes Tomorrow.” These mantras aren’t an anomaly in Silicon Valley’s playground-like campuses. Cubicles, hallways, cafeterias, and meeting rooms are festooned with Rockne-esque white-board-style slogans such as “Done Is Better Than Perfect” or “Fortune Favors the Bold,” or “Don’t Bury Your Failures, Let Them Inspire You.”

These maxims have their value, and they have helped inspire a wealth-generation machine unlike any other in human history. But moving too fast can come with consequences, especially when the mantra is heeded by young people who are often still in their 20s and 30s. In fact, the tech industry’s adherence to an ideology of rapid acceleration helps explain why America finds itself in its current predicament, with hackers reportedly involved in swaying our election and a growing acceptance of xenophobia spreading across the nation. […]

If the tech elite had no idea how their innocent products could be undermined, then now is their opportunity to pause and think about the implications of their actions on the future. As companies in Silicon Valley build robots that can run as fast as a cheetah, fleets of cars and trucks that can drive themselves, artificial intelligence agents that can predict weather patterns and respond to global market changes, and flocks of drones that will deliver our packages, maybe it’s time to put more effort into thinking about how to avoid calamitous events from occurring on a larger scale. It’s one thing for Russian agents to hack our e-mails and influence the election. Imagine what they could do with millions of autonomous vehicles that have passengers inside them.

Cynically, however, one wonders if tech companies were subverted not because they couldn’t imagine such dystopian outcomes but rather because they weren’t incentivized to prevent them. As Trump’s campaign noted recently, it spent $50 million in digital advertising and promotions during the election, with a majority of that expenditure going toward social media. Why would anyone in tech want to fix that? Financially speaking, Facebook’s plan to combat fake news could indeed backfire. (emphasis mine)

I have a few thoughts here. First, it’s clear that oftentimes our systems can get away from us. Something that might work at 1x scale might crumble at 1000x scale, and if you don’t have balancing parts in your system, there’s no way to stop it. Beyond that, it’s hard (or useless) to put the pin back in the grenade. Please also read Alan Kay’s essay on the subject.

It can happen to just about any system if we’re not careful or don’t balance for it. Democracy, capitalism, social networks + the internet, the environment, political movements, etc (and bonus points for realizing that of course, none of these systems are isolated; everything is intertwingled, as they say).

This deserves its own post, but I have to at least attempt to say it here: we in the technology industry are presently complicit in the damage our industry does to the world. It’s not a “we might become complicit in a potential future where we’re asked to create a database of, say, Muslims in America.” We are complicit in the damage it does, right now. We could all use a reminder that “disruption” historically has not been a great thing.

I’m looking through Scott Pilgrim creator Bryan Lee O’Malley’s tumblr, and a lot of what he posts is pictures from his sketchbook. I wish, as a programmer, I could have a sketchbook. I can sorta do it. A paper book lets me write my thoughts down, and I can doodle in it too (if I was any good at doodling!). Or I could blog it and let other people read / see it.

But I can’t really sketch out programs in this way. I can’t keep a running diary of code I write. The closest thing I can do is keep a running sketchblog with pictures (or videos) of what I’m working on. I can’t really share the programs themselves (at least not in a website format). I guess I could theoretically make an app, but then all my stuff has to become appy (and if it’s an iOS app, then it has to be touch-related). Programs are so tricky, because not only do they have to execute (and usually can only execute in one place / platform), they also often involve affordances (that is, they often have to be used, which implies ways to interface with the program — touch? mouse? keyboard? stylus?). How can that be encompassed in a sketchbook?

~ ~ ~

Semi-related, while reading his tumblr, I’m saving a bunch of the images I see and like. Part of me wants to print those all out, I want to be surrounded by them, and I want to draw, too. But usually I also want to share the stuff I’m currently into. That’s probably because I’ve been wrecked by social networks for so long. I feel kind of conditioned to want to share all this stuff.

What I really want is twofold: 1. I want to make great things and tell people about them and 2. I want to see what my friends are up to (what rad shit are they working on?). Through these, I’ll learn new things and meet new people, too.

~ ~ ~

Mainly, I love the idea of these sketchbooks, but so much of programming is invisible. I don’t really care to show or see code, I want to share and see sketches of programs. Those don’t really exist today. Programming hasn’t yet entered an era where we can sketch (which doesn’t necessarily mean “programming with a stylus” (though it could) so much as it means “rapidly creating rough versions of program ideas”).

Twitter has retweets. Facebook has sharing. But Instagram has no built-in reposting. On Instagram, there’s no instantaneous way to share someone else’s post to all of your followers. […]

When you have to put a little work into posting, you take it more seriously. I wonder if fake news would have spread so quickly on Facebook if it was a little more difficult to share an article before you’ve read more than the headline. […]

Instagram was no accident. The only question: was it unique to photos, or can the same quality be applied to microblogging?

I don’t think it’s unique to photos, thankfully! As Manton describes, conscious decisions by Instagram encouraged certain behaviours above others, and I think you can do that no matter what your social network’s primary content is. Let’s look at this in a bit more detail.

First, it’s important to get to the real meat of what’s going on when we talk about “fake news.” This is distinct from just inaccurate information (although that’s a part of it). What’s going on is really disinformation, better known as propaganda.

Aside from the normal reasons propaganda exists, it exists on social networks like Facebook and Twitter because it can exist on those networks. It’s profitable and useful for the parties manufacturing and disseminating it. To Facebook and Twitter, upon whose networks it propagates, it doesn’t really matter what the information is so long as it engages users. Facebook’s apathy to propaganda is regularly exploited.

Design Around It

So how could Facebook, Twitter, or a microblog network prevent it? The obvious first step is to use the tools which already work.

Facebook prohibits nudity on its platform and seems to have tools to defend against it (some combination of user flagging and automated tools). It should do the same for propaganda. Yes, it’s harder to recognize than nudity, but that’s not an excuse for not doing it. This is a starting point.

The next step is to design the interface to prevent it.

Maybe don’t let users retweet / share something with a link in it if they haven’t actually, you know, clicked the link. I bet this would be an easy win at curbing the spread of propaganda.
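As a toy sketch of what that rule could look like in code (hypothetical names of my own, not any real network’s API), the share action would simply be gated on whether the reader ever opened the link:

```swift
// Hypothetical sketch: a post containing a link can only be shared
// after the reader has actually opened that link.
struct Post {
    let hasLink: Bool
    var linkOpened: Bool = false
}

func canShare(_ post: Post) -> Bool {
    // Link-free posts share freely; linked posts require a click first.
    !post.hasLink || post.linkOpened
}
```

The interesting part isn’t the logic, which is trivial, but the product decision: the interface deliberately adds one step of friction before amplification.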

For anything that gets past that, give users tools to help them reason about the content they’re seeing. Do people routinely report this content as fake / propaganda? Show it. Who is sharing this and how often do they share propaganda? Show it.

Get readers to evaluate what they’re seeing and sharing. You read it, what did you think about it? Why? Let other readers evaluate those answers.

Your user interface can encourage thoughtfulness or it can encourage mindlessness. But that is a choice you make when designing your interface.

See also:

Eric Meyer’s fantastic XOXO talk about media and their lack of neutrality. He’s also co-written (along with Sara Wachter-Boettcher) a book on the subject.

[M]ostly writing about iOS, JS and Ruby development: snippets, walkthroughs, tips and tricks, stuff that I struggled with and links to interesting stuff I find around the web. From time to time I will find an interesting or helpful app and I will write about that, as well.

For instance, one of his recent posts is about Slightly easier Core Data manipulation in Swift, showing how you can leverage Swift protocols and their powerful but mysterious associatedtype functionality.
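I haven’t reproduced Roland’s actual code here, but as a rough, self-contained illustration of the general pattern, a protocol with an associatedtype lets each conforming type declare what kind of record it produces (in the Core Data case, that role is played by NSManagedObject subclasses; here a plain struct stands in):

```swift
// A hypothetical sketch of the protocol-with-associatedtype pattern,
// not Roland's implementation. Each conforming type says how to build
// its own Record from some raw row of data.
protocol Fetchable {
    associatedtype Record
    static var entityName: String { get }
    static func make(from row: [String: String]) -> Record
}

struct Note: Fetchable {
    let title: String

    static let entityName = "Note"
    static func make(from row: [String: String]) -> Note {
        Note(title: row["title"] ?? "")
    }
}

let note = Note.make(from: ["title": "Groceries"])
```

The payoff is that generic helper code can fetch and construct any `Fetchable` type without knowing its concrete shape ahead of time.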

It doesn’t hurt that his site is beautifully designed (especially for code snippets, which is hard to get right!) and loads lickety-split (an area my website could learn from 😅).

Thanks for taking me up on my offer, and thank you for sharing, Roland!

There’s this metaphor I heard a few years ago I really liked. It describes the human mind in two parts: an elephant and a human riding atop it. The elephant in this metaphor represents your emotions and the generally “animal” part of your brain; the human represents your rational self. The human rider’s control is at the mercy of the large and powerful elephant it rides atop. The rider can suggest, but the elephant is going to go where it wants to go.

The metaphor comes from Jonathan Haidt’s book The Happiness Hypothesis, which admittedly I have not read, so it’s possible I’m misinterpreting it. But I like it because it helps me understand what goes on in my own head (for example, I have a hard time focusing when I’m hungry! the elephant gets what it wants!), and it helps me understand what’s going on in the heads of other people. We want to be rational and sensible, but our emotions often get the best of us.

~ ~ ~

It’s funny to me, framed by the above metaphor, that the United States’s Republican Party uses an elephant for its mascot. The two aren’t logically connected, of course, because the symbols are arbitrary (if you want to interpret the symbols literally, consider also the Democratic Party uses an ass), but it’s funny to me nonetheless.

In the 2016 election, it feels like the elephant got the best of the rider. I’m not implying merely having conservative political views means you’re irrational or at the whim of your animal brain, but I am saying many people voted out of fear above anything else.

~ ~ ~

Last week my wife and I visited her parents for American Thanksgiving. Her parents generally fall on the conservative side of the political spectrum, but it seemed as though they too were unhappy about how the election had gone.

There was an elephant in the room that I both wanted and didn’t want to talk about. I think every one of us felt it. Any mention of politics was quickly met with silent, downtrodden eyes. A game of “jumbling towers” (a knockoff of Jenga) prompted me to make a joke, “jeez this thing looks as rickety as one of Trump’s towers,” that led to short, nervous laughter from all at the table. But there remains an elephant in the room.

Just what it says on the tin. It’s easy to start a blog. There’s Tumblr and Wordpress and, I mean I guess Medium too. Those are hosted for you, but you can also do something like Squarespace or you can fully host your own (but that’s slightly more work and not my point).

So, it’s really easy to start a blog. Get an account, and start writing!

While it’s easy to start a blog, and while it’s easy to start writing posts, it’s definitely hard to finish them. Sometimes you get a hint of an idea, but don’t know how to see it through. Sometimes there’s a lot you want to say, but can’t find the words. Sometimes it just feels like your thoughts aren’t polished enough. I have been through each and every one of these, and they stink!

My best suggestion is really just to publish anyway. Can’t think of a great way to end a post? Don’t! Just end it. Maybe say “And I don’t really have a conclusion here, but yeah. bye” I think the reluctance stems from looking at blogs as a publishing medium, which they very much can be if you want. But I think the idea that blog posts have to be polished holds blogs back. While I encourage everyone to research, link, and polish posts to the best of their ability, I also think it’s fine to go without. Just be explicit, “Hey these are rough thoughts” or “these are just my opinions, they might not hold up to scrutiny,” and I think you’ll be OK.

The other difficulty is finding momentum. This one’s hard too. Maybe you’re fired up to write one or two posts, but maybe you lose steam after that.

I’ll start by saying, if you feel like you only have two posts in you, then at least post those two! Start somewhere. There’s really nothing inherently wrong with a two post blog anyway. Just publish them.

Beyond that, reflect on where your original posts came from. Why did you write them? Me, I write posts when I’ve had an idea buzzing in my head for a while and I want to explore it publicly. Or, I’ll write when I want to respond to something I’ve read or somebody else’s blog post. Ideas and thoughts that don’t fit in a tweet. If I start a Twitter thread with myself, that’s usually a good indication I should be blogging it instead.

The more you do this, the more you post, the more you’ll want to post. The more people who post, the bigger the network effects. I started blogging because I saw John Gruber and Ash Furrow doing it and I thought to myself “I want to do that. I can do that.” and then I did it. When I see my friends blogging about stuff, I want to join the conversation.

If you’re reading this post, you’re obviously a very intelligent person. You probably have great ideas of your own. Great ideas that are done a disservice by trying to squeeze them into a tweet or erratically chatting them in a Slack room. Great ideas that would otherwise be lost to the sands of internet time, reduced to 404s when Twitter eventually shuts down.

I Will Personally Help You

Here’s the deal: you start a blog, and you tell me about it (either in the discussion section below, or by contacting me otherwise), and I’ll promote your blog.

I’ll make a post on my blog, linking to yours, and I’ll write something nice about it. If you’re having trouble and you want help, I’ll help you. I’ll review drafts, I’ll suggest ideas, I’ll listen, I’ll link to your posts again if you want.

There are so many people discussing important issues these days, but unfortunately so much of that gets lost on Twitter or Slack or other shitty networks. Today, it’s easy to start a blog somewhere (or better yet, host your own), and you should do it.

Oxford Dictionaries made “post-truth” the word of the year this year. Post-truth. An era where facts, where truth, is irrelevant! Not just about misinformation or a lack of facts, but a bald-faced denial of facts. Staring the truth straight in the eye and ignoring it. That Donald Trump can so constantly lie to everyone, on camera, when he is provably wrong, and that it just doesn’t matter, not even a little bit. I take sardonic comfort in feeling he’ll fuck over everyone who voted for him, that he’ll continue to lie to them, that he won’t help them a bit, and that they’ll see him for what he is. Yet at the same time, I’m starting to doubt that will matter. Why the fuck should it matter at this point?

So he’ll lie to his people as he fucks them over. As the coasts sink further into the mire, we’ll be told how climate change continues to be a myth. And nobody will stop it because there is nobody left who cares about truth.

Of course that’s an exaggeration, but one I never thought I’d have to make. One that had never occurred to me. Truth, knowledge, rights, progress: these were all a straight arrow as certain as the passage of time. But this edifice has been shaken for me recently. Perhaps it was naive of me, perhaps I never should have assumed that was the case. But it’s clear now that progress can be undone.

So how the hell do we dig ourselves out of this hole? How do we get people to value truth, to seek it out, to refute blather and bullshit and fictions?

I’d start by looking at how we got here, but I don’t know what to do after that. Amusing Ourselves to Death is my go-to reference here. About how television dramatically altered public discourse in the United States. About how politics is done a disservice by news soundbites. But there’s also Brave New World, where the people were so entertained they don’t need to care about anything else (Postman frequently alludes to BNW in his book).

Television yes. And also social networks like Facebook and Twitter. They’re a gaping hole, a bright red target for exploitation. Facebook and Twitter value “engagement” (a euphemism for exploitation), they don’t value or care about truth. They don’t care about progress. They care about people spending time on their software, seeing ads. (And true, this is just kind of capitalism 101… it doesn’t value progress or any other kind of good, it only values capital) And with phones and social networks, it’s run totally fucking amok.

I wish I had more of an answer here, or at least more of a point. But this is what I’ve got for now.

A coworker asks me as we ride the elevator “How’s it going?” and I say “Well, some days are better than others.” Like the elevator, I’ve been having my ups and downs lately. “Yeah, me too” says my coworker.

The US 2016 election results have left me feeling more emotional than I thought possible. I’ve been sad, I’ve been worried. I’ve been energized and invigorated. And I’ve been angry and scared. I can’t quite make heads or tails of what’s going on most days.

The other day I felt nearly paralyzed by everything. After reading my zillionth tweet about the election, I started freaking out. I felt like all the hope had been drained out of me and I couldn’t focus on anything but that. I was at work, but I couldn’t focus on anything I was doing. All I wanted to do was go home and curl up in a ball and cry.

But despair is not useful. Despair is paralysis, and there’s work to be done.

The quote is about climate change, but it rings so true to the world right now. America has just elected a fascist, and that needs to be opposed and fought at every step of the way.

There’s work to be done, yes, but I’m absolutely still in the grieving stage. I’m still in the I can’t get out of bed today stage, and I think for a little while, that’s how it’s going to be. I’m going to have my up days, and I’m going to have my down days.

Today I’m happy to announce I’ve added a discussions section to the website, directly below each article. Here you’ll be able to directly respond to what you’ve just read, share your thoughts, and have a discussion with other readers of my site. Today’s post is going to take a bit of a look inside why I’m doing this and how discussions work.

Why?

For many years, the blogging community I’m a part of (especially the Apple blogging community) has more or less subscribed to the “we just don’t do comments” line. Primarily, big names like John Gruber (who many of us copied) decided not to have comments, and so many of us decided not to too.

And I think these are mostly fair and valid arguments. Any author is entitled to what they do or don’t want on their own website. Comments often devolve into messy arguments, and it’s much easier to just tell people to comment on Hacker News or Twitter instead.

But I feel like I’ve been brainwashed by that party line, that “we just don’t do comments” and that’s held me back from even considering adding them to my website. For a website the size and popularity of Daring Fireball, it’d probably be madness to foster any kind of coherent conversation. But for a website the size of mine, it’s a different story. So let’s consider why I might want to add them, instead.

Primarily, it’s about having a conversation with my readers, a conversation that I just currently don’t feel happening these days. Earlier this year, I wrote:

When I started my website in 2010, I was really excited to jump in to writing on the web. There were blog conversations all over the place: Somebody would post something, then other blogs would react to it, adding their own thoughts, then the original poster would link to those reactions and respond likewise, etc. It became a whole conversation and I couldn’t wait to participate.

But I’ve never really had much of a conversation on my website. I’ve reacted to others’ posts, but I’ve never felt it reciprocated. I never felt like I was talking with anyone or anyone’s website, but more like I was spewing words out into the void. Some people definitely enjoy what I write, some agree and some even disagree with it, but the feedback has always been private, there’s never been much public conversation.

My readers are ridiculously smart and I respect the hell out of them. They have great insights, they share all kinds of connections to the things I write, and they often challenge my thinking for the better. But many of them don’t keep blogs of their own, or if they do, there’s never any cross-blog-conversation.

The “conversation” ends up on Twitter, which is a horrible medium for it. Twitter’s critical flaw is, of course, its comically small post length limit. It’s really hard to have a thoughtful discussion 140 characters at a time. This is compounded by its terrible reply threading and its complete lack of formatting. It’s 2016 and this is the place for conversation on the web? Fuck that.

So instead, I’m adding my own space for conversations.

Discussions

First and foremost, I’m referring to this space as a discussions section, not a comments section. While technically they’re essentially the same thing, by calling it a discussions section, I hope to foster the idea it’s a place for having meaningful conversation with me and other readers. A “comments” section to me implies more one-off drive-by replies that are more about the commenter than they are about the discussion itself.

Secondly, while Twitter, Hacker News, etc allow for minimal-to-no formatting options, this discussion system uses a rich text editor. You can make inline links, bold and italicize text, insert images, use lists and quotes, etc. Essentially I want to give readers writing tools to help them actually make decent conversation. It’s so frustrating that our popular tools for conversing, in 2016, are so damn neutered. Discussions here are still only HTML under the hood, but it’s a lot better than plain text.

Third, everything in the discussion section’s got to be more than 140 characters. I’m setting this bare minimum because I think it’s difficult (not impossible, but difficult) to have meaningful conversation in anything less. It has the added benefit of making one-word smart-ass posts impossible.
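The length rule itself is easy to enforce server-side. Here’s a rough sketch (illustrative Python, not my actual code; I’m also assuming HTML tags shouldn’t count toward the limit, which the rule above doesn’t spell out):

```python
import re

def is_long_enough(reply_html: str, minimum: int = 140) -> bool:
    """Check a reply against the bare-minimum length.

    Strips markup first so that bolding a one-word smart-ass post
    can't sneak it past the limit (an assumption on my part).
    """
    text = re.sub(r"<[^>]+>", "", reply_html)  # drop tags, keep text
    return len(text.strip()) > minimum
```

A reply of exactly 140 characters would be rejected; the rule is “more than 140.”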

Great kinds of replies might include (but are not limited to):

Related points the original post made you think of (related topics, articles, books, etc)

Counter-points (do you disagree with something in the post? explain your perspective)

A finer discussion about the original post (asking for clarification, perhaps)

Replies to other people who have participated in the discussion (for any of the same reasons as apply to the original post)

Other than that, it’s basically a run-of-the-mill discussion system. Individual replies have permalinks, timestamps, and avatars (which use Gravatar). Each post has a flag link on it, so if you see something objectionable, you can let me know.
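Gravatar, for the curious, maps an email address to an avatar by hashing it; no account integration needed on my end. Building the avatar URL looks roughly like this (the `s` and `d` query parameters are Gravatar’s size and default-image options):

```python
import hashlib

def gravatar_url(email: str, size: int = 80) -> str:
    # Gravatar identifies users by a hash of the trimmed,
    # lowercased email address.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{digest}?s={size}&d=identicon"
```

Because the hash is of the normalized address, “Me@Example.com” and “me@example.com” get the same avatar.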

Signing up and logging in are the same thing. When you post for the first time, I’ll send you an email asking you to confirm. Once you do that, your post will be visible. This way, I don’t have to keep any passwords.
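In rough terms, the password-free confirmation flow works something like this (a hypothetical sketch, not my actual implementation; the names and in-memory storage are made up for illustration):

```python
import secrets

pending = {}     # token -> (email, post_id): posts awaiting confirmation
visible = set()  # post ids that have been confirmed

def submit_post(email: str, post_id: int) -> str:
    """Hold a new post and return a single-use confirmation token.

    In a real system the token would be emailed to the poster as a
    confirmation link; here we just return it.
    """
    token = secrets.token_urlsafe(32)
    pending[token] = (email, post_id)
    return token

def confirm(token: str) -> bool:
    """Make the post visible if the token matches, consuming it."""
    entry = pending.pop(token, None)
    if entry is None:
        return False
    _, post_id = entry
    visible.add(post_id)
    return True
```

The nice property is that there’s no password to store: proving you can read the email is the login.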

Most importantly, I’ve got discussion guidelines which I ask you to follow. I want to keep these discussions going constructively, and I hope you do too.

Let’s Discuss

I hope you enjoy using the discussion section as much as I’ve enjoyed making it. There’s still lots to be done, but it should be mostly solid by now. Please let me know of any bugs you encounter (other than slow page loads; I’m working on that).

Anyway, is this a good idea? Are there better ways to foster discussions that I’m missing? I’m happy to say, you can now let me know below.

A year ago I gave myself a challenge: read a thousand books in my lifetime. I decided to start counting books I’d read since November 14, 2014 (although I’d read many books before this, I really only wanted to start counting then, so I could better catalogue them).

Last year I managed to get through 24, which I was quite happy with. I ended up with a little more reading time on my hands this year and managed to get through 33, which has me happily surprised! Still going to be a long haul from here, but I’m more than 1/20th of the way done and have some good strategies for reading a lot of books.

Looking over the list of books, I don’t know that this year really had a theme, but I do see some common threads. I read a lot about systems and systems thinking, and a bit about hypertext systems. I read a bit about language, reading, and metaphor. I read a bit about corporations, what they’ve done to our planet, and how to shame them. And I’ve read a few more graphic novels; I’m really enjoying all that medium has to offer.

Finally, I realized in my first year the overwhelming majority of the books I’d read were written by men. This year I made a conscious effort to read more books written by women (13/33), and in the coming year I want to read even more voices.

Below are the books I’ve read in the last year, along with notes for a few of the standouts.

The Man Who Mistook his Wife for a Hat by Oliver Sacks.

Seconds by Bryan Lee O’Malley.

Beautiful and funny graphic novel from the creator of the Scott Pilgrim series. Plus, it’s Canadian!

Drama by Raina Telgemeier.

Dragon Ball Vol 1 by Akira Toriyama.

Economix by Michael Goodwin, illustrated by Dan E. Burr.

I knew almost nothing about how the American / global economic system worked before reading this graphic novel, and I found it a gentle introduction for people like me.

Thinking in Systems by Donella Meadows.

I wish I’d read this book earlier in life! It revealed to me a mental framework (and notation) for thinking about the world in systems. I had the vague notion that “systems are everywhere,” but this book really opened my eyes to what that means in practice. If you care about systems (education, politics, economics, oppression, biological, etc) then you should read this book. I can’t wait to read it again.

Harry Potter and the Chamber of Secrets by JK Rowling.

My wife and I have been (slowly) reading the Harry Potter series out loud to one another, which is nothing short of magical.

Turtles, Termites, and Traffic Jams by Mitch Resnick.

Another book about systems! This time, about programming decentralized systems in a Logo-like programming language. Resnick shows how many complex systems emerge from simple parts, with no central control.

Memory Machines by Belinda Barnet.

A must read book on the history of hypertext.

The ABCs of Bauhaus by Ellen Lupton.

The Word Exchange by Alena Graedon.

My favourite novel I read this year. Graedon describes a noir-semi-dystopia New York City where a “word-flu” has infected the device-using population, causing aphasia in speakers, and literally erasing words from the dictionary. It’s beautifully written and a real fun read.

Metaphors We Live By by George Lakoff and Mark Johnson.

This book opened my eyes (metaphor) to the way we create (metaphor), share (metaphor), and explore (metaphor) meaning and understanding. This book demands (metaphor) a re-read.

Spelunky by Derek Yu.

Bootstrapping by Thierry Bardini.

What we see when we read by Peter Mendelsund.

Throwing Rocks at the Google Bus by Douglas Rushkoff.

If This Isn’t Nice, What Is? by Kurt Vonnegut.

Everyone should read more Kurt Vonnegut.

The Corporation by Joel Bakan.

Terrifying, eye-opening look at corporate structure and its deleterious effects on our planet. Enraging that we, as a people, allow this to happen.

Dragon Ball Vol 2 by Akira Toriyama.

Is Shame Necessary? by Jennifer Jacquet.

Norwegian Wood by Haruki Murakami.

Our Choice by Al Gore.

Amusing Ourselves to Death by Neil Postman.

Everyone who works in media (and if you’re a software designer or developer, you work in media) should read this book every single year.

It’s been a devastating, gloomy sad week. Not only did Hillary Clinton lose the 2016 United States Presidential Election, but Donald Trump also won it. There are many in the United States who have new reasons to fear, for now the country has elected a man who manifests and normalizes hate.

On Wednesday, the day after the election, my wife and I were in a sad shock. She had an idea: “Let’s invite some friends over for dinner. We’ll commiserate and I’ll glue them back together with cheesy lasagna.” So that’s just what we did. (It doesn’t hurt that my wife makes the best lasagna)

We decided to call it “Hopesgiving.” Where in Thanksgiving you say what you’re thankful for, in Hopesgiving you say what you’re hopeful for. We shared food, wine, fears and tears, but most importantly we shared hope.

We talked about our grief, we talked about how this election has been a wake up call, especially now that the results are in. We talked about how we wanted to fight all the nastiness and hate, even if we don’t know exactly what to do yet.

Just having some friends in our home helped immensely. I think a sense of togetherness is what we really needed this week. We talked and cried and shared stories about a time when each of us had embarrassingly peed our pants as children (hey, it happens! and it’s kind of funny, looking back). It may sound silly, but sometimes it’s just good to cry with friends. And when it hurts too much to cry, it’s good to laugh with them too.

So if you’re having a hard time this week, whether you live in the United States or elsewhere, consider having an evening of Hopesgiving. Gather those close to you and share food or drinks or board games or whatever you need. Find some togetherness and find some hope.

A few weeks ago I saw something that made me sad: Craig Hockenberry, a Cocoa developer I once looked up to, tweeted this mean thing:

My new approach to dealing with uninvited contact:

Put yourself in Bennett’s shoes for a moment. How do you think he would feel getting an email like this? When I was starting as an iOS developer, I looked up to people like Craig. He was well known in the community, had lots of great experience under his belt, and seemed like someone you could learn a lot from. If I had sent him an unsolicited email asking about Cocoa dev, and he’d replied with something like this (and then tweeted it!), that would have absolutely devastated me.

I don’t know all the context behind this tweet. Maybe this Bennett character is a real asshole, but that’s not really revealed in Craig’s tweet. What’s revealed here is Craig proudly sharing his mean response.

If you get a lot of unsolicited email, I imagine that’s super annoying, but it’s mean to respond like this, and it’s meaner still to publicly shame the poor guy. All Craig needed to do here was not reply.

Worse than being mean, this is sharing the meanness with everyone who follows him. I was very sad to see Dave Verwer link to it at the bottom of iOS Dev Weekly, sharing it with even more people.

And finally…

If you see this meanness shared and celebrated on Twitter or Slack or elsewhere, please stand up against it. Put yourself in the shoes of other people and try to imagine how they might read it. If you were new to iOS dev (or any community where this happens), how would this make you feel? Would you want to be the person laughing at the meanness, or would you want to be the person stopping it?

For a long time growing up I had this weird belief that if something was on a menu at a restaurant, it must be good for you. “They” wouldn’t let something be on a menu if it was bad for you. There are rules and laws designed to keep us healthy and safe. Growing up, I’d never really given it a whole lot of thought, but it was a comforting belief and it seemed reasonable.

Of course, it doesn’t hold up to any scrutiny and it’s not true. There’s all kinds of unhealthy garbage on menus. There is nothing really inherent in restaurant menus that forces them to give you choices that won’t eventually kill you. There are definitely some rules about what can and can’t be served, and there are plenty of attempts at limiting unhealthy choices, but by and (often) large, there are no built-in protections for you.

At some level, I think I held this belief about more than just restaurant menus. “Of course I don’t need to wear a seatbelt in cabs, because taxi drivers are professionals.” “Of course this book is going to be accurate, because they let it be published.” “Of course the doctor will do a good job, because they have an advanced degree.” Never mind that people make mistakes and errors all the time!

The underlying principle, maybe, was I thought because there’s a way that these things could be made safe or healthy or somehow ideal, that of course they must be, too. What kind of world wouldn’t protect itself by default? This all probably sounds stupid, but that’s the belief I held.

Isn’t it interesting that we as a culture (at least in the west) used to tape things? In the 1980s and 90s, it was common to use a VCR to record things off TV (or other VHS tapes) or record songs off the radio (or from other cassette tapes). I’m sure not everyone did this, but to my then-child eyes, it seemed like it was pretty prevalent.

What was so interesting about it was we were sort of appropriating media for our own uses. Television dictated “you watch this show when we tell you, or not at all” and taping culture said, “No, I’ll watch it when I please” or “I want to keep this around for reference later.” Radio and the music industry said “You either listen to the music (and ads) all the time, buy our tapes and records, or don’t listen at all” and again our culture had these little tools of defiance where we made audio our own.

The mix tape was a great fallout of this. Not only were we making copies, we were recombining copies as we saw fit! Maybe the perfect playlist for you had jazz and hip hop, but good luck waiting for the music industry to put out a tape like that. Fuck it, make it yourself.

Everything was and is a remix, yes, but without taping culture these remixes were often made and experienced en masse, created and consumed largely via entertainment industries. But now we could remix on our own.

Things have changed today, as they always do. For starters, most video and audio is copy protected (something tells me the industries sorta didn’t like home taping?). And with things like Netflix and Spotify, the need to record something to time shift has diminished. No real need to record something when you can just play it at will from a service, anyway. There’s also TiVo, which seems to fill the same niche as VCRs, albeit with a little more computer involved.

But it seems like the whole cultural idea of “taping” has kind of evaporated. Yes, it’s often technically possible to make copies of things (you can make or download copies of movies, music, etc), but culturally it’s not something we do as often anymore.

The closest things I can think of are apps like Tumblr, which allow you to do a kind of constant drive-by remix of a never-ending flow of “content.” This is similar, I guess, but it feels much less like you’re appropriating the media you want, and instead like you’re just redirecting copies of bits into your own personal ephemeral stream. It’s not that one is necessarily better than the other, just that it’s different.

Also cameras. With cameras in our pockets wherever we go, we now have appropriation devices. We can make crude copies of what we see, visually accurate but otherwise lifeless renditions of the world. I can and do take pictures of pretty much anything that interests me, but I also take pictures of things I want to remember, things I need to do (like travel receipts I need to get reimbursed for). I make screenshots of text conversations I want to hold on to.

The camera + screenshots are a common way we appropriate digital data on our phones, but the OS makers don’t seem to take advantage of this. The camera + screenshot + appropriation culture is brimming with potential, but relatively stunted due to the software available.

Do you think we still live in a taping culture? Has it largely evaporated in favour of large industries telling us what we consume and when? Or do we as a culture still make media our own?

I recently re-read (and re-loved) Derek Bickerton’s book on language + human evolution, Adam’s Tongue. I previously read the book in 2010, and I remember enjoying it, but feeling like a lot of it was over my head, so I’ve decided to re-read it with fresh eyes in 2016, and wanted to write a little review of it.

On its surface, the book is about how language evolved in humans, and how language was crucial to our evolution as a species, but what I love about this book is it’s about so much more.

One thing the book covers really well is how evolution works. It talks about Darwin and Richard Dawkins (natural selection and selfish genes, respectively), but it also talks about how those viewpoints are often limited. Bickerton really gushes about a relatively new view on evolution, that of “niche construction theory” which explains, essentially, how species are changed by their environment, but crucially, how species also change their environments, too.

Bickerton spends a lot of time not only talking about evolution, but also continuously emphasizes fallacies we hold about evolution. The big one is how we view evolution with homo-centrism: we see evolution only in terms of ourselves, and often put ourselves at the centre of it. When we look at evolution with this fallacy, we’re essentially looking at all animals / life forms in terms of how they compare to us, when in fact, evolution does not care at all about us. There’s really no centre to evolution, Bickerton says.

A specific example of that fallacy is how we often look on Animal Communication Systems as “failed attempts at language,” but really they’re just successful attempts for those animals to communicate. They’re not bad versions of language, they’re good versions of ACSs.

I’m really grateful he’s gone to such lengths to repeatedly point these sorts of things out, because I’ve found it eye-opening when considering what little I know about evolution. And, I think these viewpoints apply to non-evolution topics as well.

Another nice thing the book does is that it doesn’t hide the existence of other viewpoints on language evolution. Although he disagrees with these other viewpoints, the author at least acknowledges and explains them. He’s not dragging anybody’s name through the mud, but he does explain their arguments, and crucially why they don’t hold up to the scrutiny of his research and perspective.

In fact, an entire chapter is devoted to dismantling a theory put forward by Noam Chomsky et al about language’s supposed spontaneous evolution (I’m not sure if I’ve parsed the argument well enough to distill it here, but suffice it to say it was a thorough deconstruction). It’s refreshing to read opposing viewpoints, not so they may be shamed or humiliated, but so they can be contrasted and explored from different vantage points.

This book was an eye-opening read about language, evolution, and the history of the human species. It’s about what makes us us, and about how that very us-ness enables us to reflect on us. You should definitely read this book.

Often when I suggest a book to a friend, they’ll say “Excellent, looks great! Added to my forever-growing ‘to read’ list of books 😞.” I definitely sympathize with this sentiment: there are just so many books and so little time to read them. As I’m currently working my way through lots of books, I thought I’d offer some unsolicited advice on how to read a lot of books.

The first and most important thing is consistency. Find a rhythm for reading that works for you and stick to it as best you can. Plan to read every day, even if it’s only for ten minutes. Ten minutes of reading every day is a lot more than zero minutes of reading every day.

If you have a commute involving public transit, that’s a great time to fit reading into your day. My commute is pretty short each day, but the time adds up. When I used to work from home I’d set aside cool-down time after work ended but before I started my evening, giving me a kind of reading commute instead.

I consider myself to be a pretty slow reader, so consistency has been the key for me. Slow and steady finishes books.

The second suggestion is to find a good reading environment, the place where you read. I find reading requires a lot of focus, so I try to read in places where I won’t be distracted. That can be almost anywhere for me, but there are things which intrude on my concentration.

Phones and computers are a huge distraction. Every notification or badge or buzz destroys my focus and makes reading much, much harder. So, keeping my phone away (or off) is really helpful here. I tend to read paper books for many reasons, but one is they lack any inherent distractions!

Television is my ultimate focus destroyer. I find it nearly impossible to read (or write!) when there’s a tv on anywhere in my home. Interestingly, a crowded subway is a much easier reading environment than a home with a television on. I think it’s because tv is designed to grab your attention at all costs, and it’s very good at this. If you’re trying to read while somebody else is watching tv, try playing some music to drown it out (jazz works well for me) or even better, invite the tv watcher to join you in silent reading!

My final reading suggestion is to stay motivated about reading. This can come in many flavours, but here are the three things I do:

One, I keep a spreadsheet of all books I’ve read, with a little bit of info and a review about each of them. This helps me see my progress in getting through books, and lets me glance back at any notes or thoughts I may have had while reading. You definitely don’t have to do this, especially if it feels like work to you, but I find it’s a useful way to keep me going.

Two, get excited for your next book. Whenever I read a book, I find it motivating to think about the book I’ll read after this one. That gives me something to look forward to and it helps me finish my current book. You don’t have to have a concrete ordered list of all books you’ll ever read, but it helps to plan one ahead, one you can’t wait to get started. If your current book is a slog, this will help (and if it’s too much of a drag, maybe stop reading it?).

Three, go to a bookstore often. Nothing in the world makes me want to read more books than walking around a bookstore. You don’t have to buy a book every time (though often I do…), but I find just being around a bunch of books and book lovers really makes me want to read all the time. Seeing the books, picking some out, walking around different sections, etc. Amazon is great for many reasons, but it’s an entirely different experience than walking around a physical store.

These are my main suggestions on how to read more. It can seem like an uphill battle at times, but the more you read, the easier it gets. As they say, the journey of a thousand books begins with a single page.

I want to talk about something I’ve been noticing in how people converse online, in particular publicly in networks like Twitter and Slack. A lot of this conversation seems to be argumentative, which misses a great opportunity to grow understanding in communities.

By “arguing” I don’t really mean people having shouting matches or otherwise having heated or nasty conversations, I mean the literal sense of the word, having a reasoned, rational, and relatively polite debate. Most of this applies equally to the nastier version of arguing most of us think about on Twitter, but I’m going to give the benefit of the doubt and talk about the kind of arguing that happens at best on Twitter and Slack.

What I notice goes something like this: Somebody will make a statement, then one or more somebody elses will reply to that statement, agreeing or disagreeing, with reasons supporting their stance. Again, it often ends up meaner and less reasoned online, but I’m talking about the best case.

As far as debating goes, this is pretty run of the mill. But the problem is that a lot of subtlety gets left behind. When all you’re trying to do in a reply is prove or disprove a statement, you ignore the nuance of what’s being said, and you don’t allow any of it to enter your worldview. There is no space for “Oh, that’s interesting! How does that relate to…”; there’s really only room for “I disagree, here’s why…”

But it’s hard to fit that kind of nuance into a Twitter discussion. And while Slack lets you type long messages, the flow of Slack often doesn’t leave time for contemplation (at least not in a group setting). It’s not impossible on these networks, but these media really don’t want you thinking about the subtleties. So while possible, it’s not common.

A lot of what I publish here isn’t so much to be right or wrong, isn’t so much to prove a point, but instead it’s a way for me to share something I’m thinking about so that you, reader, can see a potentially different vantage point. You may disagree with some (or all!) of it, but I hope disagreeing with it doesn’t mean you ignore everything I say.

For most of my life I’ve tried to have discerning ears and critical eyes about what I read, hear, and learn. It’s not that I’ve just taken everything at face value and believed it all. But I think in recent years I’ve started to approach what I read or hear with more nuance. Essentially, I’ve started to really internalize that there usually isn’t such a thing as “the whole picture” when learning something, or a “correct answer” when trying to figure something out. There’s no perfect political view, and there are no silver bullets.

What they teach you in school, for example, is often slightly or entirely incorrect. But even when what they teach is entirely accurate, it still leaves out different points of view, different histories, because there just isn’t enough time to delve into everything.

At their best, schools have to make value judgements about what’s most important to be taught. Unfortunately, this usually doesn’t include teaching the fact I just described, “Hey kids, this isn’t the full story, you should know that.”

I think the idea “this isn’t the full story” is a big one for me, because I’ve started to internalize that there really isn’t a full story in the first place. And there are so many details we ignore when we assert to ourselves that we know everything about a topic.

I was having a conversation with a co-worker recently where we talked about work processes, and how we don’t have all the answers figured out yet, but that we hope to find them soon. That got me thinking as to what we consider an “answer” for how we work. I’ll use the example of code review at my software development job, but this should apply, in the abstract, to any kind of thing you do at work.

Our “answer” to code review is to follow a set of steps on how to do it. This is our code review process, where we do one thing after another until the code review is done, and it works pretty well. But while the steps are easy to follow, this answer, like most answers, isn’t perfect. In particular, it has no mechanism to change itself.

But what if we get a little bit meta on our problem and say “the answer to the problem of code review isn’t so much ‘what are the steps to do code review’ but instead, by which process do we decide those steps in the first place?” Now it becomes much more interesting.

So the “answer” to code review becomes a process for finding out how to do code review. Instead of just being an unchanging set of steps, the “answer” now becomes a method for figuring out those best steps.

Day to day, this probably looks exactly how it did before we changed our point of view on it. But with this new perspective, we’re able to evolve how we do things as we go along.

This meta perspective isn’t just useful for code review, or just for job-related things; I think it can be applied anywhere you need an “answer” for something. Instead of treating the answer as a finite thing, treat the answer as a process for finding answers (and go as meta as you please).

Some countries use this technique for their governments. The United States decided the answer to tyranny isn’t really a specific person or law, but instead a process for avoiding tyranny called democracy. On the surface, democracy seems similar to code review: a set of steps you follow (voting) to achieve an outcome (leaders). But democracy also includes the process by which leaders lead, through an evolving system of law, among other things.

The idea of answers as an evolving process itself isn’t definitive, and not a solution for everything. But it may be a useful tool for your cognitive tool belt.

When I started this website in 2010, I knew what a successful blog was. It was a blog with thousands of subscribers, and ideally, enough ad revenue to “take the site fulltime” and be paid to blog all day. It wouldn’t hurt if you participated in a community with other bloggers, too.

That was a great definition of a successful blog in 2010 and I think it’s still a great definition in 2016, too. But damn is it hard to achieve. By that metric, I can really only think of a few select sites which should be considered successful. That’s kind of funny, isn’t it?

Let’s consider alternatives.

The biggest metric of success for me hasn’t been subscriber count (which is easy to say because I have a small subscriber count anyway), but the quality of the people who subscribe. When people tell me “hey, I love your blog” or “that post you wrote last week really spoke to me,” not only are those wonderful things to hear, they also tend to come from people I respect tremendously.

So, one form of success: few, but highly respected people read my stuff > oodles of people I don’t really know read my stuff. (True, they’re not mutually exclusive, but if I had to pick one, I’d pick the first any day).

Another definition of success is longevity. I’ve been running this site since 2010 and it’s quite remarkable to be able to refer to 6 years of my public writing on the internet. I’ve had my ups and downs in terms of quality, but this is one of the few projects I’ve stuck at for this long. The posts may not make me money, but they’re a public outcrop of some of my thoughts, linkable for all to read.

The final definition is kind of a mix of the two: I feel a major success whenever anyone refers to my posts. I don’t just mean normal links from other blogs (although those are of course great), but when somebody refers to one of my posts to help them understand or reason about something. When somebody points to my post and says “this! this is what I’ve been trying to say!” There’s pretty much no better feeling of success than having a company you’re interviewing at say “I know you’re a blogger because we refer to some of your posts in our internal wiki as part of our dev process.” How much more 😍 can you get?

There’s a lot of talk about the “death of blogs” but maybe that’s because our definition of a thriving blog requires it to make oodles of money it just can’t these days. But if we change our definition of a thriving blog, we see many are doing pretty OK! I look around at some friend-blogs, like Ash (who in large part inspired me to start writing) and Soroush (who in large part inspires me to continue writing) and theirs are doing stupendously well today. Blogs aren’t dead, we just have outdated perceptions of them.

I find so much of reading a book takes place after I finish the last page. As someone still relatively new to reading books for pleasure, I find books really grow on me after I’ve finished reading them.

Part of it is definitely letting my brain gel on the topic I’ve just read. After I’m done a book, it usually goes on my mental back burner, and I often find myself making connections to what I’ve just read long after I finish.

Ideally, I’d like to formalize this process a little better, by taking more time to reflect on the books I’m reading (among other things). I’ve never been a super thorough note-taker, but it seems like a good way to reflect on what I’m reading. (It also kinda feels like work to me, which is perhaps why I don’t take reading notes!)

But there’s value in this extra churning. Even if a book is kind of a slog to read, I’ll usually try my best to finish it, because I’ll often get more value out of these books after they’re done than while I’m reading them. It’s these extra connections, made with other books I’ve read or experiences I’ve had, which draw out the value in a book. I suspect the more books I read, the stronger this gets.

“What’s the number one killer, worldwide?” asks Jason Brennan, CEO and founder of Frankenstein, Inc, a stealth mode startup Speed of Light is bringing you exclusive coverage of. We’re sitting in the Geneva Lab of their Palo Alto campus, where he’s talking about his company for the first time.

“More than cancer and heart disease and malaria, the number one killer worldwide is of course death itself,” Brennan answers. “We could cure all the other diseases, but eventually humans will still die of natural causes, so why even bother curing malaria or whatever? What we’re doing is much bigger than that.” Frankenstein’s plan is kind of ingenious: users take a daily anti-death supplement to help slow, but not stop ageing. A user death will still eventually occur, but Frankenstein has a revival device which they say is extremely successful at user revival. Web services typically measure their uptime by how many “nines” of uptime they have (e.g. 99.99% is four nines). Brennan says their revival units are good for five nines of revival odds.
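As an aside, the “nines” arithmetic is simple: n nines means an availability of 1 − 10⁻ⁿ. Here’s a quick sketch (my own illustration; `nines_to_percent` is a made-up helper, not anything Frankenstein actually ships):

```python
def nines_to_percent(n: int) -> float:
    """Availability percentage implied by n "nines" of reliability."""
    return (1 - 10 ** -n) * 100

# Four nines is 99.99% uptime; five nines (99.999%) works out to
# roughly one failed revival in every hundred thousand attempts.
```
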

“My mother always told me about money, ‘you know you can’t take it with you when you go.’ Her solution was to enjoy your money and be charitable while you can,” Brennan says with a smile, “but I’d rather just not die in the first place.” Brennan said he’s doing this by following his mom’s advice, funding Frankenstein with the vast majority of his personal wealth. “But I’m still charitable; I’ve donated lots to teach kids Javascript, there are just so many jobs out there still, so what better way to help the kids.”

Brennan seems either unaware or unconcerned about the irony when asked about his startup’s namesake, “I mean everyone’s seen a Frankenstein movie, but I like to think our approach is a little more civilized.” When asked how it compares to the book, he said he “[hasn’t] read the book yet, but it’s on my list. I heard it’s written by a woman too which is good because I’m trying to read a few books by women, you know?”

Frankenstein is still in private testing for now, but plans to launch a public beta this winter in Europe. Despite their challenges, Brennan is excited. “We think the launch is going to be out of control. We think it’s going to be a runaway hit.”

Yesterday, Elon Musk unveiled SpaceX’s spectacular vision of interplanetary space travel and the colonization of Mars. Their video, while dazzling, is scant on details (which, as visions go, is fine), but it’s the detail at the very end of the video which leaves me unsettled: the terraforming of Mars.

I think terraforming Mars (the act of altering a planet’s climate to be similar to Earth’s, with breathable air and bodies of open water) would be a huge mistake. Yet if you look around much of the tech world, nobody is even questioning it.

SpaceX’s vision is suggesting, without displaying even a cursory amount of thought, that we should dramatically and irreversibly alter the fundamental climate dynamics on an entire other planet. Mars has plenty of water locked in ice, we just need to warm the planet up and bingo bango, we’ll have lots of liquid water to splash around in.

This is bad for two reasons:

First, we don’t yet have a very good track record of building an advanced technical civilization that doesn’t totally ruin the environment of a planet (e.g., Earth). I’m thrilled Elon Musk works on electric cars and solar cell technology. Both technologies are necessary for an environmentally friendly technological civilization, but neither is sufficient for one. We need much more: a strong fundamental indoctrination of environmental respect and preservation, and new systems of government and (crucially) education to help populations thrive in new frontiers. There’s probably a lot more I can’t even think of, which brings me to…

Second: hubris. It’s incomprehensibly hubristic to think terraforming another world is a mere technological detail to be glossed over and figured out later. We can build space-faring rockets, what’s so hard about radically overhauling a climate? The hard part isn’t so much the physical alteration of a planet (we’ve managed to do that quite well on Earth, and we didn’t have to think about it!), but how to think about altering a planet. We’re not enlightened enough to deal with that, yet.

I am in full support of exploration of our Solar system. I think it’s crucial to our learning as a species, as representatives of Earth. We stand to gain so much by exploring new worlds, like where we came from, like if we have siblings among the stars. And eventually, yes, I hope that we’re ready to one day thrive on new worlds, but we have so many questions to answer first.

As the 1967 Outer Space Treaty puts it, “outer space, including the Moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means.”

We don’t have much precedent for companies attempting to claim ownership of celestial bodies.

What makes us entitled to the rest of the solar system? Is it ours to do with it what we please? Is it our manifest destiny? To let our capitalism, which has thus far ravaged our home planet, extend endlessly into the vastness of space, pillaging ever more worlds?

There are so many examples of human misuse of the Earth that even phrasing this question chills me. As Carl Sagan put it: “If there is life on Mars, I believe we should do nothing with Mars. Mars then belongs to the Martians, even if the Martians are only microbes. The existence of an independent biology on a nearby planet is a treasure beyond assessing, and the preservation of that life must, I think, supersede any other possible use of Mars.”

I don’t have answers to these questions, but we desperately need to explore them before we start fucking up other planets. They are not a technical detail to be figured out later, they are among the most important questions our species will ever ask.

To: Old Friend

Sent: Tuesday, August 16, 2016

Have you ever done a thing and then winced at the very thought of it basically as soon as you’ve done it, and then forever? That’s basically what I do, all the time. It’s fun, you should try it.

I sent you a message a few minutes ago and in my head I was like “Oh hey, I’ll just make it really short and peppy and that’ll be good,” thinking to myself how it’d been a long time and so I didn’t want to send you a long diatribe masking anything. I’d just be all aloof and that’d be an easy way to start a conversation.

But oooh, there’s that embarrassment creeping up on me.

The internet is so tremendously weird. It’s lovely and it’s terrifying all wrapped up into one big mess.

I wish catching up with people on the internet was more like the Dandy Warhols’ “We Used to Be Friends” (“A long time ago, we used to be friends”… I know the song is more about moving on, but it’s catchy and fun, whatever) and less like “I’m lonely and it’s Friday night and we used to be friends, so let’s ‘Connect’ on Facebook” bleh.

Is there a nice middle ground that doesn’t involve one person sending the other a longish message out-of-the-blue? (oops) Or that doesn’t feel like bad nostalgia? Probably not.

Anyway, I was thinking to myself lately about how I’ve really connected with exactly 5 people total, ever, in my life, where I’ve had regular, honest conversation and that’s one of my favourite things (you’re one of those people, of course).

I’m guessing there’s like a 90% chance this message is just going into a void somewhere. Or maybe one of your distant descendants will discover it one day, some kind of Indiana Jones-like character, spelunking around the internet, trying to discover relics of the ancient past. Sorry, if that’s the case.

When I was a broke university student, I used to look toward the future when I’d be a well paid software developer. I thought to myself, that’ll be great because I’ll be able to afford a new iPhone every single year! That’s what All True iOS Developers do, right? If you read the Apple blog / twitter world, that’s certainly what you’ll hear. We buy a new iPhone every year; that’s what we do.

I’ve been hearing a lot of grumbling about the impending iPhone 7 and its supposed lack of a headphone jack. John Gruber jacked off about it last week, and lots of people are talking about it. Ugh, that’s really going to suck if they get rid of it, right? What am I going to do if I can’t use my headphones?

Here’s a suggestion I can’t believe I have to make: maybe don’t buy the new iPhone? I mean, if you’re an iOS developer, presumably you’ve got a fairly recent model already… there’s no real need to buy another one, especially one you seem a little sad about.

I never ended up buying a new iPhone every year, either. So far I’ve been getting one every two years. By this logic, my iPhone 6 would be up for replacement with this year’s iPhone 7, but now we’re at the point where this two-year-old model is so good even today, I feel no need to replace it. It’s still mighty fast, has a great camera, great battery. It’s a perfectly good device; replacing it would be a waste.

And that’s the other thing, too. It’s a waste of money to get a new phone every year, but it’s also a waste of resources (do you really need 5 iPhones sitting in their boxes, collecting dust?). It’s wasteful on the environment, and I dunno, rampant consumerism just doesn’t seem like a great thing, either. I’d love to get 5+ years out of a phone, wouldn’t you?

So, if the idea of losing a headphone jack on your phone seems unappealing to you, remember that you don’t have to buy it.

Over the weekend I re-read Neil Postman’s fantastic Amusing Ourselves to Death, which I can’t say enough good things about. Seriously, this book is about as Jasony a book as they come, and no doubt a large influence on what makes me Jasony in the first place (previous post about the book).

If you haven’t read the book (shame on you), it’s essentially about how media shape the kinds of public discourse we have (specifically politics, current affairs, and education), and how America’s shift to a predominately television-centric country diminished its ability to have serious conversations about these issues.

Postman argues public discourse in America was founded at a time of pervasive (book) literacy. The media of print entails memory: arguments can be complex and built up over pages, chapters, and volumes; the reader must take time to think, process, and remember what they’ve read; books allow us to learn the great ideas of history and of our current society. There were (and still are) plenty of junk books, but books and print supported well-argued, serious discourse as well.

Conversely, in television we find a medium of entertainment. Like print, there is much junk content on TV, which is just fine. The problem, Postman argues, is when television tries to be serious, because it fails in spectacular ways. Television is an image-centric medium, and as such it’s impossible to have complex, rational arguments for or against anything. Think about how dreadfully boring a “talking head” is on TV news, and those usually only last for a few minutes at a time!

Where print requires you to remember, television requires you to forget. Instead of long, coherent discussion, you have a series of images strewn together which are almost meaningless. In his chapter “Now…this,” Postman looks at tv news as an example of this. Most news segments last about 60 seconds, and are placed in an incomprehensible order. A devastating mass murder, now a political gaffe, now a car recall, now unrest in the Middle East, now an advertisement for retirement savings. Not to mention immediately following the news is Jeopardy.

Amusing Ourselves Today

“But Jason!” I see appearing in a thought bubble over your head, “the book was published in 1985, when television was the dominant medium in America, but these days it’s been displaced by app phones and the Web. Is this book still relevant in 2016?” Absolutely, unequivocally, yes.

The good news is, some software allows for interactivity and personal agency. Through email, blogs, and forums (i.e., written word), we can have complex, well-reasoned discourse (I said can). We can even improve some of the shortcomings of the printed word, by pulling in various sources via links, by including images and interactive, responsive diagrams and graphics, and by collaborating with many people around the world.

Software does not require us to sit quietly, mouth agape, awaiting amusement. But today’s software does ask us to do so, relentlessly.

Much of what we do with app phones is largely incoherent. I’ll read an email from a friend, now I’ll check twitter, now I’ll check Instagram, now I’ll write some code. And too often, even just within one of these apps it’s all incoherent. First, remember that for the overwhelmingly large majority of software users, today’s social software is “what you do” with a computer or phone; Facebook is the computing experience for many people. And within an app like Facebook or Twitter or Instagram, you have a series of things strewn together in a “feed.” An article about Donald Trump, now your cousin’s baby’s 2nd birthday, now (lol) a video of this goat who faints when it’s scared, now hey cool an ad for Chipotle.

Or take Instagram, for example. True, you’re consistently getting images, but that’s about it. There’s no space for discourse on Instagram. Image dominates, and the strongest message you can really send is a “like.” There is little room for discussion, and what discussion there is tends to be irrelevant anyway. Instagram shows, it does not discuss.

Books and Beyond

My interpretation of Amusing Ourselves to Death is its thesis goes beyond books and television, and again focuses more on how media relate to discourse. It’s not to say that the printed word is some kind of ultimate medium for discourse, just that it’s presently much, much better at it than is television (and I think, most of our software, too). There’s nothing wrong with media that entertain us, the problem is when a medium only entertains us and is incapable of having cogent conversations about anything else.

All throughout middle school, high school, and much of university, MSN Messenger was the place for me and my friends to socialize online (if you’re my age but grew up in America, chances are you can replace MSN with AIM). MSN was an instant messaging system. You had a contact list, online / away / busy / etc statuses (with custom status messages), and usually had one-to-one chats (although you could have multiple people, too).

You knew your friends were available to chat because they had their status indicated. An “online” status meant there was a good bet if you messaged them, you’d get a response rather quickly. “Away” meant they were logged in, but probably not at their computer. “Busy” meant they were present, but didn’t really want to be disturbed. These weren’t hard and fast rules (someone could appear to be any status, but still be present anyway, and vice versa), but you generally felt a sense of presence with your contacts. You at least knew what to expect, generally, when you messaged somebody.

These days, it seems like Instant Messaging, as a concept, has largely vanished. In its place we have things like iMessage and texting (I’ll admit, I don’t have a Facebook Messenger account. Do a lot of people use this?), but we lose a lot with them. Sure, iMessage means you can send a message whenever, but you also lose the feeling of presence you got with IM.

Because there’s no concept of “online” or “away” (etc), you have no idea if the other person is available to chat at the moment. Where IM chats often felt engaging while both people were online, iMessage “conversations” feel sporadic, like a slow trickle of words back and forth. Sure, sometimes you do have bouts of back and forth messaging with iMessage, but more often than not a message is a shot in the dark (consider how gauche it is to text somebody “brb” or “gtg”). The expectation is the conversation never really ends, but in fact, it never really starts, either.

And who knows, maybe this is just me. Maybe everybody uses Facebook Messenger, or maybe everyone else just has more engaging friends they text or iMessage. I use Google Chat and literally IM with two people ever, these days. But I really miss having nice long conversations with my friends.

What about you? Do you have engaging conversations over iMessage / texting? Does everyone just use Facebook Messenger (or another IM service)? Or is it really a lost art?

“You’re Canadian? You don’t have much of an accent” people tell me when they find out I’m Canadian. It’s true, I’m from New Brunswick, Canada, but I’ve never had much of an East Coast accent, and much of it has faded since I moved away from home a few years ago. I never really minded in the early years because I was a little embarrassed by it (my home region is generally considered a little backwards by the rest of Canada), but lately I feel like I’m losing a little bit of my identity because of it.

There are many telltale signs of a New Brunswick / East Coast accent. The big tell is our hard Rs (“are are harrd Rs”), though that’s common to most of the region (I correctly identified Kirby Ferguson of Everything is a Remix as an East Coaster on his hard Rs alone). More specific to New Brunswick is our unmistakable lexicon, like “right” (pronounced “rate”) to mean “very” (“it’s right cold outside”), “some” to mean “quite” (“it’s some busy at the mall”), “ugly” to mean “mad” (“she was some ugly when she heard the news, let me tell ya”). We drop suffixes (“really badly” becomes “real bad”), too. And I’m pretty sure we invented the “as fuck” intensifier (“it’s cold as fuck right now,” “I’m tired as fuck”) long before the internet caught on to it.

I took a linguistics class in university (which I highly recommend, by the way), and we learned about language extinction, that many languages are disappearing and we’re left with less and less as time goes on. I asked my teacher why this was a bad thing, but I kind of got a funny look (I meant the question genuinely, not in a rhetorical or smarmy way; at the time I didn’t really understand why a lack of diversity in language was so bad). I think I understand the general sentiment a little better now.

Since moving away from home, I’ve definitely lost much of what I had of an accent. When you’re not surrounded by speakers of your dialect, it feels weird using words or sounds you know will stand out to the people you talk to. My Rs have softened, my “eh”s have disappeared, and even the most quintessential Canadian word has changed: my “sorry” has gone from the Canadian “soar-y” to the American “sar-y.”

It’s a weird kind of identity crisis to either sound normal to yourself but weird to those around you, or to sound weird to yourself but normal to those around you. But I’m trying to reverse course by calling it out (and by watching copious Trailer Park Boys). Though the sound of the word might change, I’ll at least always say “sorry” when I bump into somebody—that Canadian part of me will never fade.

How odd is the juxtaposition between our mass consumption culture and the meaning of our lives? On the one hand, mass consumption gives us a perspective of the unlimited: there’s always more to consume, it’ll always be there, it’ll always replenish. On the other hand, our lives are inherently finite: you only get one childhood, you always figure out life too late, youth is wasted on the young, you’re going to die someday.

It’s kind of distressing to think about. Mass consumerism asks us to buy in (literally and figuratively) to the idea of limitlessness. It asks us to ignore, to not even think about, the fact that our lives are not at all limitless. There will be a new iPhone every year, the grocery store shelves will always be restocked, but I’m 27 years old and my childhood is long over and I’m never going to get another one.

Maybe it’s more comforting to think in the consumption mindset, that there will always be another book, another tv show to watch on Netflix, another hamburger to eat at McDonalds, a longer infinite list to scroll through. But it’s also really dissatisfying how little that lines up with my life, how much, in fact, it denies what my life is like. Consumerism doesn’t give me a frame of reference to make sense of my life, to understand what it means to age or to have a finite set of choices (and I bet looking at life as “a finite set of choices” only makes sense as a perspective because of consumption culture; we probably wouldn’t look at life as being limited without mass consumption as our default way of looking at the world).

I’m sure this is well covered in philosophy and I’m certainly not suggesting I’m the first person to think of it; it’s just that, jeez, this sort of thing has been hitting me hard lately and I don’t know how to make sense of it.

I wanted to expand a little bit on a tweet I made the other day about aliens in science fiction movies. There’s an opportunity in these movies to explore western society’s fears about immigration amongst Earth’s peoples (immigrants referred to as aliens), but most movies don’t seem to do this.

Most movies about aliens see them as invaders and earthlings as the heroes, defending the homeland. My friend Brian pointed out to me these movies (and fears) aren’t about immigration but colonialism. The aliens aren’t looking to join us, they’re looking to conquer us. It’s a great point, and I think it matches up with fears many people hold about immigration, but I think it’s weak of screenwriters to pander to these fears instead of exploring them.

Science fiction is a lens we use to see ourselves and our current world; it’s a way to extrapolate and play “what if?” and see more sides to our lives than we currently see today. In stories like Brave New World and Nineteen Eighty-Four, fears of oppression through technology were explored, not celebrated.

But in many of today’s alien-related movies, the fears of being taken over by aliens are reinforced, not examined. We’ve got our guns and we’re the heroes, nobody’s gonna take our land from us, we say. Why don’t we have more movies where oh, I don’t know, the aliens aren’t invaders but are refugees? Or where the hero says “Wait, hold on, are we sure they’re actually invading? Shouldn’t we learn from them before we start blowing them up?” Whether or not people really do think immigrants are invaders looking to oppress us, it’s cowardly for alien films to not examine this.

There are a few good examples, though. District 9 is particularly on the nose about aliens with a refugee status; there are humans who see those aliens as invaders, but those humans are portrayed as villains. E.T. has aliens not as invaders or as refugees, but as explorers who wish to learn. True, E.T. is a visitor, but he’s also explicitly not an invader. Despite naming the titular alien a “xenomorph,” the movie Alien is a lot more about sexual predation than it is about invasion (the face-huggers and chest-bursters are not so subtle allusions to rape and its unwanted consequences). I’ve heard Alien Nation handles immigration well, but I can’t personally vouch for it. And I’m sure it’s explored better in science fiction literature, too.

Immigration is a vital topic to pretty much everyone on this planet, yet fears of it are pandered to and reinforced in science fiction movies all the time.

PS: Yeah, maybe actual contact with actual extraterrestrials wouldn’t go so hot. They’d almost certainly be of vastly different intelligence, technical prowess, hell, even body chemistry (microbial exchanges alone could easily destroy us). They may not be violent invaders (that’s probably more a reflection of our own evolution and history than of theirs), but they’d definitely have arisen from some form of natural selection, originally. But movies with “alien invasions” are hardly about presenting scientific reality, and that’s OK. An alien movie where they come here and we all get alienpox and die probably isn’t telling a very good story.

PPS: Yeah, it’s also problematic to have actual aliens represent humans from different countries. Showing them as wholly different, often monstrously so, reinforces the view that “aliens are other,” which doesn’t help anybody.

Today the phrase “Not All Men” (often #NotAllMen) represents something pretty terrible. When feminists speak on the internet about the patriarchy, inevitably dudes will butt in with the phrase “Not all men!” to say, “Not all men are rapists!” “Not all men wish for inequality!” etc. I won’t go into all the details of why this is problematic because many better essays have already been written, like this one or that one.

But I’d like to reclaim this expression. I want “Not All Men” to mean “I don’t want this thing to only have men.” For example, the programming team I work on currently has no female developers, so I want this team to be Not All Men, but include women (and people of any gender, too).

I want casts of movies and TV shows to be Not All Men. I want people I see at conferences to be Not All Men. I want the CEOs and people in the news to be Not All Men.

To be clear, I know there are many women (and people of all genders) currently working very hard to achieve these goals, and I support that in every way. By reclaiming this phrase, I hope we can reinforce and help what’s currently being done. I hope the phrase can act as a reminder to us all that until we see teams of Not All Men out in the world, there’s still work for all of us to do.

I’ve made a cheesecake, and I’m not a professional chef, but I’ve worked really hard on this one and I’d really love to share it with everyone, because everyone loves cheesecake. But nobody wants it, because they’re stuffed from all the other cheesecake (and pies and puddings) they eat all day, everywhere.

So of course this makes me sad. I worked hard on my dessert and I think it turned out great. But social media is a potluck with way too much food. And even though you’ll only really connect with people sitting directly beside and across from you, it’s a potluck you simply must attend, because there’s so much good chow.

The following is a mishmash of thoughts following up from yesterday’s post about blogs and conversations. The real theme of today’s post is “I don’t really know what a blog is” and “that’s OK” and “blogging will probably die” and “is it just me or are these posts getting less coherent as time goes on?”

There isn’t really a strict definition for what a blog is, but it’s safe to say a blog is usually a collection of posts about something, sorted by recency, and usually with some kind of way to subscribe (RSS or Atom, or these days Twitter / Facebook feeds). The form of blogs is always kind of undulating, evolving, following the people (see The Awl’s The Next Internet is TV about this).

So blogs end up less like books and more like news or other periodicals. Yeah, the blogs I’m talking about are personal blogs, not tech “news” or what you’d typically think of as a periodical, but they are based on time. You either come to a blog because you saw a link to it (where else, but on some sort of time-based stream like Facebook or Twitter), or you come to a blog to see what’s new (maybe from a time-based RSS reader).

The medium of the blog is all about time. Thus its content is shaped around time. That’s why so many blog posts are about current events, that’s why it feels like blogs should foster better conversations, and that’s why it’s so frustrating they really don’t.

I don’t really know what my website is all about. Maybe it’s my web diary, maybe it’s a place for public pontifications. But definitely at some level, I’m putting ideas out into the world because I care what people think about them. At some level, I want to spark something in you, the reader. I hope what I write tickles some part of your brain so you think and ideally, respond (maybe this is fundamentally manipulative, though? there’s another post idea for the future).

Yesterday, after writing my post in reply to Atul, Aza, and co., I was thinking about how much work it is to put together a post like that. You often hear people refer to blogs as a “conversation”, but if that’s true, it’s more work than any type of conversation I’ve ever had.

Compare it to other kinds of group conversation we can have on the internet:

IM, IRC, etc.

Twitter and FriendFeed

wikis (not all wikis are really conversation-friendly, but the original wiki certainly is)

email, discussion forums, blog comments

Writing a blog entry in response to someone else’s is far more difficult than any of those. Partly, it’s because blogging is often slightly more structured and polished than the other methods; but there’s also a lot of overhead in the actual act of writing a post.

This has definitely been my experience too. Trying to stitch together quotes and links to other blogs is incredibly tedious and error-prone. And if you use a format like Markdown, making sure you’ve got the quotes, lists, and links properly copied over is just that much harder. Everything’s so fiddly. Is it any wonder almost nobody does it?

When I started my website in 2010, I was really excited to jump into writing on the web. There were blog conversations all over the place: Somebody would post something, then other blogs would react to it, adding their own thoughts, then the original poster would link to those reactions and respond likewise, etc. It became a whole conversation and I couldn’t wait to participate.

But I’ve never really had much of a conversation on my website. I’ve reacted to others’ posts, but I’ve never felt it reciprocated. I never felt like I was talking with anyone or anyone’s website, but more like I was spewing words out into the void. Some people definitely enjoy what I write, some agree and some even disagree with it, but the feedback has always been private, there’s never been much public conversation.

And I get it. Like Pat said, the interface to blogging doesn’t really encourage conversation, which makes blogging feel anti-social and lonely. My guess is blog comments were a way to make things feel more social, less isolated, but unless a lot of thought is put into them, comments become a total shitshow almost immediately (see Civil Comments, a promising attempt at fixing this). RSS lets readers subscribe to your posts, but you have no relationship with these people; ideally you want your readers to be peers so you can read their blogs, too.

There’s a lot of talk about the death of blogs, and it’s easy to understand why. Blogs are a lot of work to set up, they’re often fiddly to get right, people feel an urge to put out their best selves, and they have a terrible interface for being social. Not to mention how terrible writing on a touch screen is.

Luckily, there are still a few of us nuts around still writing on the web, who don’t really care if “blogs are dead” or not. But we sure could use some company.

Sometimes I’m developing on a particularly difficult task, maybe it’s a bug I can’t quite squash, or a feature I’m a little stuck on. But sometimes, when I get to that hard part, instead of hunkering down on it, my brain says “oh well, time to go see what’s on the internet!” This is the Dread stage of software development.

Between you and me, the logical part of my brain knows, yes, this is a bad path. When I encounter a hard problem, skipping off to the internet is the last thing that’s going to help me. But obviously there’s a compulsion in there that makes me do it.

This is pretty much procrastination 101, where I don’t want to do the hard thing, so I go do the easy thing instead. But I think it’s also compounded by working from home all the time: I don’t really head to Twitter to see cool links, but instead to hear from people. That’s unfortunately one of the messed up parts of Twitter: humans are mixed in with brands, and everyone seems to be linking off to something they find interesting; there never seems to be a lot of human conversation (other than impossible-to-follow shouting matches).

I’m not trying to excuse heading off to the internet, but I am trying to understand why I do it because I’m hoping that will help me prevent doing it.

This Dread stage only gets worse as time goes on: the less I focus on the hard problem, the harder it becomes. So the “obvious” solution is to keep a longer focus on the problem (easier said than done). But the underlying solution, I think, is to feel more engaged with the problems I’m working on. While I find working for Khan Academy to be immensely fulfilling, every app has its share of mundane bugs and features. I need to remind myself, yes, maybe this random UI bug feels pointless, but it’s in service of a greater goal (helping millions of learners have access to a free education). And it’s really hard to see, especially when I’m a developer looking at code that could be in any app, that this isn’t just a random bug; fixing it has positive impact far beyond the bug itself.

It’s so easy to get lost in the minutiae of everyday hard problems, and it’s so hard to remember, sometimes, why I bother. But I think it’s worth it in the end.

I heard this idea years ago (and naturally, can’t remember where), but it’s been in my mind ever since: programming is performance art. I’m not talking about the act of programming per se (although that could also be considered a performance), but that the result of programming is performance art.

Chances are, the things you and I program today won’t exist as programs in even just a few years. OS APIs, platforms, dev tools, even hardware, all continuously change, so much so that today’s apps will soon enough start to rot. It’s hard to use a piece of software unchanged for more than 5 years; more than 10 is almost impossible.

Software is not a medium that preserves itself. Old software is best preserved in writing, pictures, and movies (media whose own digital formats are still subject to rot, but it seems at least less so), but rarely can you directly execute the software itself. You can watch a video of Doug Engelbart’s oNLine System but you can’t play with the software itself (thankfully you can play with a Xerox PARC Smalltalk system, though).

There are some workarounds, but they’re rare. Writing for the web browser seems to be a good way of achieving some degree of longevity (JavaScript in browsers seems to be quite stable, but maybe the dev tools aren’t). Writing and maintaining one or more layers of virtual machines seems another route, although I worry that’s just shifting the problem down a level of abstraction. I’m sure there are other solutions (ship the platform with the app?), but these are exceptions: the way software exists today is temporary.

The main way to prevent software from rotting, it seems, is to maintain it: update it so that it continues to work as the platforms supporting it change underneath. In this sense, though, it’s not the same software you started with, as it’s continuously changing. You can’t step in the same river twice, they say.

It seems this is the way software is meant to be: a thing that exists, for a time. Software is not a book or a painting, software is a Broadway matinée or a parade. It may happen more than once, it may go by the same name, but every time it’s different.

I’m still in meta post land, and today I wanted to briefly touch on the slight redesign of my website (if you’re reading this in a feed reader, take a sec and poke around the real site). Here’s what’s new:

Boosted the type size way, way up. I’d been meaning to do this forever, but a recent essay about accessibility tipped me over the edge. Everyone can read big type, but not everyone can read small type, simple as that.

At the same time, I lightened the look of the page a bit: gone is the heavy black border around the page; instead I’ve got a lighter border, which feels representative of the old look, too, without weighing the page down.

Similarly, I moved the giant masthead below the first post. When you come to a post, you probably don’t give a crap about the name of the site, and instead just want to start reading. If you really want to “click to go home” at the top of the page, you can still do that anyway; there’s a big invisible space at the top that’s a link to the homepage.

I got rid of the responsive jazz. When I last redesigned the site, “responsive” sites were all the rage, and I used a column-based CSS framework. It was nifty, but ultimately way overdoing what is essentially a 1 column website. Now that column is centered. Finally.

The site should still look great on mobile (where the design has become even lighter, and finally, Futura Condensed Extra Bold masthead on iOS!).

I fixed the El Capitan bug where all the type looked bold? wtf Safari? (I would have fixed this sooner but I have yet to upgrade my machine, and I was honestly hoping Apple would have fixed the bug by now. Ah well, fixed now).

That’s essentially it. Most of the changes are relatively small (except for the type, which is relatively big), but I think it makes for a much more readable experience.

Yesterday I talked about my guidelines for writing every day and today I want to talk about how I write every day. As I mentioned yesterday, regularity, without rigid rules, has been pretty key for me, but it wasn’t really clear to me how to go about doing this until I gave it some thought.

In terms of physically doing the writing, I usually do it every morning before work and then publish more or less immediately after (let Twitter be your copy editor!). Writing first thing in the morning has worked really well for me because my head is mostly clear when I first wake up. I try to stay off Twitter / social networks before I get started, because they often pollute my head (sadly this is true any time of day) and make it harder to focus on what I’m trying to say.

Each post takes me around half an hour to write, depending on how long the topic is and how much of a groove I’m in (as mentioned yesterday, this has gotten easier over time but I still struggle from time to time).

This groove is something I strive for, and it’s made easier by obsessively thinking about what I’m going to write before I start typing it out. This is your standard “literally walk around outside with the idea in your head / shower thoughts” sort of thing, but I find it helps me explore points I want to make in the post. As I’ve mentioned before there’s no real “true form” of the idea, what’s in my head and what gets written are different, but thinking about the idea before writing it definitely helps. And because I write one post per day, that means I get about one day to pick an idea and let it bounce around my head before I write about it.

The ideas, which I keep in a todo list, tend to come from three primary sources:

My idle thoughts while going for a walk, riding the subway, doing the dishes, or writing other posts. I tend not to listen to music or podcasts while doing these activities and instead let my time be my time (i.e., don’t kill time).

Conversations with people. Jeez this is a great way to get ideas, take them from your friends! But more seriously, riffing with someone is a great way to explore ideas. (I wonder, what would a writing medium look like if it was based on riffing with people?)

Reactions to things I read elsewhere, be they books or posts, or industry trends (in my head, many of these posts start with “I got a lot of problems with you people!” in George Costanza’s voice). Sometimes I rant, but often seeing or reading something inspires a little nugget of an idea, which eventually grows into a post.

When I have an idea for a post, I try to write it down as soon as possible (I embarrassingly forget them sometimes) and leave any notes I can think of on the subject so I’ll have something to start with when I revisit.

That’s about all I can think of for my writing process. It’s not perfect but it’s been working well for me. Though I’m writing mainly to get the ideas out of my head, I try my best to write accurately, to not assert anything I’m unsure of, and to note when I plain just don’t know what I’m talking about. I don’t want anyone to treat my writing with authority, but I’m so glad when people like what I write. It’s the best mental exercise I’ve ever done.

If any of this sounds like fun to you I highly recommend giving it a shot, and please let me know when you do, I’d love to read it.

I’ve been writing (and publishing) every week day on my website for almost two months now and it feels incredible. And it was a lot easier than I expected. Here are the guidelines I run with:

Post one thing almost every weekday.

Write it when you get up in the morning, before you start work (I work from home, so that helps).

Publish it when people are awake.

It doesn’t matter how long or well researched it is, really (but try not to write junk).

If I’m sick or on vacation or just really can’t post, don’t sweat it.

Do this until I don’t want to do it anymore.

That’s basically it. I’ve been unusually consistent (for me) at this in part because I treat those as guidelines, not hard and fast rules. Normally when I set a goal for myself it’s way too ambitious, I feel overwhelmed, and I bail on it. The usual me would have said at the start “I’m going to publicly commit to publishing one post per day, every day, for the next year.” and then I would have failed after 2 weeks.

But with this project, I’m trying to be as lax as possible. I wanted to write every day because I had a backlog of ideas to write about and because it was a good motivator to get out of bed a little earlier every day. I have no real goal in mind of writing for a year or anything like that, I just want to do it until I don’t want to do it anymore. That feels so much easier and less of a burden than if I’d set some big lofty goal for myself.

None of my writing would I consider truly amazing, but that isn’t really the point. The point is for me to think out loud, get the thoughts out of my head, and have fun in the process. I was worried I’d quickly run out of post ideas, but my idea list is twice as long today as it was when I started (and that’s not counting everything I’ve written about in the meantime), so there’s no real end in sight (at least until I get to a point where I don’t want to write any of the ideas in my list).

Writing every day has made it a lot easier for me to “just write” and I think it’s made me a better writer, but I absolutely still struggle from time to time, too. Sometimes I can just crack my knuckles (ew) and crank something out and it’s awesome. But other times I’ve struggled, deleted attempt after attempt, and eventually switched topics for the day.

It’d be easy for me to say “So, I failed at my project-per-month goal and decided to do this writing-every-day goal instead, aren’t I smart?” but in reality it only looks like that in hindsight. The two were mostly unrelated. It just so happens that writing every day has helped me get into a better habit of practice and improvement, but it wasn’t done as an alternative to my failed goal.

(Huge credit also to my friend Soroush Khanlou, who wrote a post per week in 2015, he is a major inspiration. Mine are mostly furiously written and then published, but his are thoughtful, well researched, and edited.)

As I said in yesterday’s post, I think it’s better to be internally, rather than externally, motivated while trying to make great work. It’s better, I think, not to worry about what others are doing and instead focus on what I’m doing as a motivator for my own stuff.

And yet, I can’t help but keep coming back to this Bret Victor Showreel of his work from 2011 to 2012. In just two short years, Bret created (or at least, published) a prolific amount of groundbreaking work, month after month, sometimes week after week.

The first half of the class was to be graded based on the number of pots they could create throughout the semester. The more pots they made, the higher their final grades would be. […]

In contrast, the second half of the class was told that their grades depended on the quality of a single pot; it needed to be their best possible work. […]

At the end of the semester, [outside] artists were […] commissioned to critique the quality of the students’ work and overwhelmingly declared that the craftsmanship of the pots from the first half of the class was far superior to those of the second half.

The lesson I took from all of this was: if I wanted to make really great stuff, I had to be prolific; I had to make a lot of stuff, iterate on it, learn from it, improve it, and finish it.

So I set a goal for myself near the end of 2015: I was going to make and publish one project per month. These projects were to be mostly research prototypes of neat interfaces I’d been thinking up; I’d research them, prototype them, iterate, then write and publish a little essay at the end of each month.

It’s nearly April and you may have noticed: I have not at all succeeded at this goal. It turns out, this goal was pretty hard for me for a few reasons:

Research, prototyping, iterating, and writing take a lot of time.

I have a fulltime job.

I enjoy spending my free time with my wife, friends, and family.

I can’t seem to stay focused on things, or at the very least, I’m easily distracted.

Finishing and shipping things, even prototype demos, is a challenge for me.

Yesterday I wrote a bit about popularity and how I deal with (the lack of) it. Today I want to dive a little deeper into why I even care about it. Despite me writing about it this week, I don’t normally spend a whole lot of time consciously thinking about popularity or being liked or well known or respected. But it obviously matters to my brain at some level.

At the core, I think it’s part of being a human: we’re innately social beings and generally speaking, that’s a good thing. It feels good to our brains to be liked, to be a part of the group, to communicate with our friends, and, I suspect, our enemies, too.

Today’s online “social networks” definitely exploit this, though. We’ve had this innate social ability for hundreds of thousands of years, and suddenly things like Facebook show up and amplify our social tendencies to an extreme degree, and that makes us behave strangely.

What used to be a joke told to a physically present group of friends is now shared with hundreds of people on Twitter. Where I might expect a few in-person chuckles over the span of several seconds before, on Twitter I feverishly refresh to see if anyone has “hearted” or retweeted my quip. Did anyone like it? Does anyone think I’m funny?

Maybe I’m more socially obsessed than I’d realized. But I feel like today’s online social networks severely subvert what it means for humans to be social, in ways we haven’t adapted to yet.

i started wondering if social media is dangerous. Here’s what i’m thinking.

If gossip is too delicious to turn your back on and Flickr, Bloglines, Xanga, Facebook, etc. provide you with an infinite stream of gossip, you’ll tune in. Yet, the reason that gossip is in your genes is because it’s the human equivalent to grooming. By sharing and receiving gossip, you build a social bond between another human. Yet, what happens when the computer is providing you that gossip asynchronously? I doubt i’m building a meaningful relationship with you when i read your MySpace CuteKitten78. You don’t even know that i’m watching your life. Are you really going to be there when i need you?

Adam Westbrook talks about Vincent van Gogh and the benefit of doing creative work without the audience in mind.

It’s a wonderful video discussing van Gogh’s prolific output, even when nobody was buying his work. Westbrook argues van Gogh wasn’t motivated by onlookers or social success, but was instead motivated by autotelic goals:

Mihaly Csikszentmihalyi describes people who are internally driven, and as such may exhibit a sense of purpose and curiosity, as autotelic. This determination is an exclusive difference from being externally driven, where things such as comfort, money, power, or fame are the motivating force.

The video doesn’t really address today’s social landscape. Yes, van Gogh theoretically could have had a physically close social group (or a distant social group, as with his brother), but he couldn’t have had a social group with thousands of people like we have today. He wouldn’t have seen likes and favs and retweets whirl by him every day, and he wouldn’t have felt the same social pressures we have today, either.

I think internal motivation is ideal, and it’s something I strive for myself (make awesome shit that I’m proud of, and don’t care so much what others think), but I think it’s unfair to feel bad about caring what others think, too. I also think it’s important we examine why we feel so socially overwhelmed online these days (or at least, why I feel that way; I don’t wanna drag anyone else in with me), and that we demand better from social networks like Facebook and Twitter (like, for example, the work of Joe Edelman).

Popularity has a much bigger influence on my work than I’d like to admit. I try really hard to not let it bother me, but the truth is if something of mine becomes popular, it feels great, and if it doesn’t, then it feels crappy. The worst part is, it often feels random to me what’s going to become popular: I’ll spend weeks perfecting something I really care about and it goes nowhere; other times I’ll crank something out in 20 minutes and it becomes really popular.

This is frustrating. And it’s a big problem, not in the sense of how much it impacts me, but in the sense that there are a large number of interrelated parts to it. Like, why does popularity matter to me? What’s the point of my work? What are my real goals? What can I do about it?

I’d like to explore those in future posts, but for now I want to look at how I’ve learned to deal with (not) being popular.

(I’ll say right away, I’m proud of my work and I think I’m a good developer / writer, but I don’t think I’m amazing. I don’t think my work is always fantastic, or “deserves” to set the world on fire, but that it’s usually solid enough.)

The simplest way to make popular stuff seems to be to make it easy to catch on.

Give your stuff a click-baity title.

Make it catchy, give it a hook, etc.

Write about something very controversial.

People love lists and they love one sentence paragraphs, too.

That seems to be one way to make popular stuff (though I’m not entirely sure, because I’ve never tried it myself), but it’s always felt kind of scammy to me. It may very well be a way to make popular stuff, but I’m not sure it’s a way to make good stuff.

The other way to make popular stuff, we’re told, is to make really great stuff and then it’ll “just catch on.” That’s how the myth goes, at least, but I don’t really buy it. I’ve known lots of people who write or make truly awesome stuff but rarely does it catch on. Meanwhile, I’ve seen others make very popular but otherwise mundane stuff, too. Yes, it’s good to make great stuff, but it makes a huge difference if you have friends in high places, too.

But for me there’s a third option, and that’s to try my best to let go of the popularity urge. It used to really bother me, I’d obsess over it, I’d get jealous of other people whose work was popular but I felt they hadn’t earned it, and it was all really immature and never made me feel great. But letting go helped.

When I started obsessing a little less over the popularity of my work, I started realizing something really great: the few people who did really like my stuff were people I often super respected. Some of these were family members, some were close friends, some were even popular developers and writers whose work I tremendously admired. This core group of people who really like my work makes a world of difference, and makes “being popular” a lot less important.

I realize that’s kind of a cop-out (like when a company declines to hire you and you say “Oh that’s good, I’ve decided I actually didn’t want to work there anyway”), but that’s what works for me. Popularity affects me, of course it does, but I try not to let it get to me, and instead I focus on the people I respect who happen to really like what I do.

Here’s a thing that’s happened to me a few times in life: someone will say something novel, in a talk, a book, or in everyday conversation and I’ll say to myself “Well, of course!” The thing being said feels obvious now that I’ve heard it, but in reality, I probably wouldn’t have made the right connections on my own.

The first example that popped into my head is from a Bret Victor talk, where he says:

It’s kind of obviously debilitating to sit at a desk all day, right?

I heard this and thought, well yes, of course. It made complete sense once I’d heard it, but I don’t think I’d ever explicitly thought about it previously.

That’s the sort of stuff I often write about, too. I’m not writing groundbreaking stuff, but I am trying to make some connections I (and you) might not have otherwise made. It might sound obvious when you read it, but my hope is by writing it down, by giving it a name, whatever obvious thing I write about becomes just a little bit more tangible.

I haven’t read enough on the topic, but my guess is by giving something a name and making it more tangible, it’s easier to do things with that something. It’s easier to incorporate that named idea into what you know, what you think about. It’s easier to talk about that idea. It’s easier to apply, compare, and contrast that idea with other ideas. And not to mention, on the web, you can literally link ideas together (until all the links rot and you’re cursing the underlying architecture of the web again…).

So maybe you’ll read this and think “well duh” and I’m fine with that. But it wasn’t obvious to me.

If you’re a professional software developer, it’s a good bet you’re pretty regularly emailed by recruiters trying to get you to join other software development companies. Developers are in such high demand there are whole teams of people whose job it is to try and hire us. As developers, we’re incredibly lucky.

And yet, the most common reaction I hear among developers on the public complaint service Twitter is that of dread. “Ugh. Another recruiter email, god.” “These recruiters are so lame, trying to get me to join this dumb startup.” “Way to send me an obviously generic letter.”

I’ve got to say, straight up, fuck that attitude. Our jobs are in such high demand that we’re regularly sought after by people hired to seek us out, and the general reaction is “ew stop”? I’m not sure developers realize how rare our situation is, how many non-developers search for months and months trying to find a job, when nobody’s hiring, and yet all we have to do is check our inbox once a week. Compared to nearly everyone else, we sound like spoiled brats.

Now I’m not saying there are never good reasons to complain about recruiters. Sometimes they’ll get your info wrong in careless ways (like addressing you by the wrong name, even though your email is your name), and that’s sometimes offensive. Sometimes recruiters are too aggressive. Perhaps you have legitimate concerns about how the company represented treats women, and you want to write a thoughtful, public article about why that is. But I think these sorts of complaints are vastly different from the “ugh, why do I get so many recruiter emails??”

We’re incredibly privileged as software developers, and we’re lucky to be so sought after. But when we complain about too many recruiters, we sound like snots to pretty much everyone else. Maybe we should reflect more on our lucky position, because it won’t last forever.

Them: No, it’s very easy to use, you just need to learn how. Here are the docs. How can you still not like it?

The basic gist is “I don’t like this thing” and the response is “Justify why you don’t like this thing” and it happens all the time. I run into it in all aspects of my life but I find it’s especially egregious amongst programmers.

There’s a mismatch here: I’m trying to share my feelings about something, and programmers are trying to get me to prove my feelings. I’ll admit, I’m often not so good at rationalizing my feelings but that’s kind of the point: I’m not trying to rationalize anything, I’m just saying how I feel about something.

But feelings often seem anathema to programmers, where rationality reigns supreme. If you can’t justify or prove a feeling, then it’s often treated as invalid.

For me, this is a morale destroyer. I’ve found trying to have a conversation with programmers so frequently devolves into an argument (in the “having a debate” sense of the term; a non-heated, civil discussion, but an argument nonetheless), and while arguments are useful for finding logical conclusions, they’re terrible for friendly conversation or chitchat.

Do you remember those kids in school who were really good in debate class? When they’d start treating every single conversation or interaction that way, you wanted to remind them that not everything is a debate. Sometimes you’re just bantering in the park.

And I get it. I’m a programmer too and I’ve done it too. And I’ve done it outside of my programming life, and it sucks. A non-programmer friend of mine bought a new camera and said “Wow it’s so great, it’s got a 10x zoom lens, a 4 megabyte storage—” I interrupted my friend and said “I think you mean 4 gigabyte storage, but go on.” The conversation abruptly ended; my friend’s demeanour went from joy to disdain almost immediately. And I felt like an asshole because I was an asshole.

Programming forces us to be so technically correct all the time we often end up forgetting how to be humans. But not everything has to be justified, not everything has to be correct all the time. There’s more to life than just being right.

Recently I was reading this post by the Facebook design team discussing the evolution of their new “Reactions” feature. It’s a neat article about their design process and the motivations behind the feature, but one thing in particular stuck out to me:

The whole point of expanding reactions is to have a universally understood vocabulary with which anyone can better and more richly express themselves. […]

Reactions should be universally understood. Reactions should be understood globally, so more people can connect and communicate together.

At the core this is an OK goal: “universally” understood ideally means any other person in the world should share the same meaning of a Reaction with you. But there are a few problems with this:

As Patrick Dubroy pointed out on Twitter, “7 icons is not about ‘a universally understood [vocabulary with] which anyone can better & more richly express themselves.’” They may be fine icons, but they’re a far too limited palette to express very much.

More troubling to me is that, at best, aiming for a universal norm homogenizes cultures into a lowest-common-denominator situation. We have so many cultures around the world, such a rich diversity of ideas, beliefs, and ways of living. I’m personally quite unfamiliar with most of the world’s cultures, but the solution to that isn’t to genericize them or translate them into my culture; the solution is to help me understand them as they are.

If all I ever see represented from other cultures are the things they have in common with mine, I’m never going to learn about what makes those other cultures different or special. I’m never going to empathize with them or the people who participate in them. All I’m going to do is reinforce what I already know, and the cultures I don’t know better will remain “other” to me.

Facebook’s mission is “to give people the power to share and make the world more open and connected,” but distilling the diversity of all people and cultures down to seven genericized icons is no way to do this. If you want to make the world more open and connected, then you have to give people the means for empathy and understanding.

About a year ago I saw this video about the beginnings of new kinds of filmmaking in virtual reality. This is something that had never occurred to me before but seemed like a really cool perspective: instead of watching a movie on a screen, you are immersed in the movie.

There are all kinds of exciting prospects brought by movies in VR: what does it mean to be a “viewer” of a VR movie? is the viewer essentially the camera? what role does a director have in shaping that viewer-as-camera experience? does the viewer have control of their point of view during the movie?

Watching a movie today requires sitting for two or more hours, especially if multiple people are watching. As I wrote about previously, moving is a big deal, and being stationary for hours on end is not healthy. But the medium of film basically demands it. The movie plays start to finish without letting up; unlike most theatrical plays, there’s no intermission for you to get up and stretch your legs (unless you’re watching 2001: A Space Odyssey). And the director-controlled camera means that, as a viewer, you have only a fixed perspective on the film, one you can’t move around.

The easy thing, the default thing even, to do in virtual reality moviemaking is probably to keep things largely how they are today. Viewers are seated for the duration of the film, but maybe they can control it with handheld remotes. A bunch of people plopped down on couches with screens glued to their faces for hours at a time.

But there’s no reason why this has to be the case. VR is a new medium which sheds many of the constraints of the 2D-on-screen, body-destroying movies we’re used to. VR movies can incorporate the body, can require the body, and have the viewer be an active, moving participant in the movie, instead of a passive onlooker, seated for hours.

(PS: look at where you watch movies in your home. I bet that space would be a little dangerous if you suddenly strapped goggles on and ran around in it for two hours, right? I think “VR” should be a transitional technology, kind of like how we started with black and white photography: it’s a great start, but we should strive for better. Likewise, VR is a blinder, and not just for your eyes: your other senses, like touch and smell, don’t really get to play at all; you end up waving your arms in the air with no sense of physicality. We should treat VR like the obviously transitional technology that it is, and demand dynamic environments that incorporate more of the body and its senses.)

There’s a great series of posts by Rishabh R Dassani about Sitting, Moving, and Standing. Without giving too much away, the gist is:

Sitting for long periods of time is detrimental to your health

You need to build movement into your day

Standing desks aren’t really a great alternative; they have their own serious health issues

I’m someone who works from home at a sitting desk, so these posts have really hit home for me. In my default state, I sit for most of my day, for hours at a time. Working from home means I can even cook my meals at home, so I don’t really need to go far at all during the day (I try to get out of the apartment for lunch most days, though, unless the weather is bad).

Though we should all be wary of folk medicine, what Rishabh is describing seems to make some sense. Sitting or standing all day is deleterious to your health; the key is to move around throughout the day.

I’ve been following his suggestions for a few weeks now, and anecdotally I feel better (I’m probably not actually healthier, but it feels good to get up and move around again). Every half hour I get up and walk around my apartment for about 5 minutes. Once or twice a day I try to go outside for a longer walk (I’m not super diligent at this, yet).

This is good for now, but it’s not a long-term solution. My line of work demands I sit down, staring at a rectangle. When I heard Bret Victor’s “The Humane Representation of Thought,” this part really stuck out to me (54:00):

It’s kind of obviously debilitating to sit at a desk all day, right? And we’ve invented this very peculiar concept of artificial exercise to keep our bodies from atrophying, and the solution to that is not Fitbits, it’s inventing a new form of knowledge work which actually incorporates the body.

I can’t change my line of work overnight. I can’t start making iOS apps by walking around in some kind of computational environment for knowledge work (but I imagine, given such a technology, people would still try to make apps for old media!), but I can think about it, and I can do my best given what I have. What do you think about that? How would you do your job if you couldn’t be stuck sitting at a desk all day?

So I started becoming obsessed with why there are so few girls and women in engineering and what I could do to change that. I started talking about it with everyone I could, and asking them, how did you get into engineering, and why are there so few women and girls? And I started to hear the same response over and over again: you can’t fight nature.

Seriously, smart, educated people would tell me, you know, there are just biological differences between men and women.

And they told me, you know, men just are naturally inclined toward building and engineering, they’re just good at it, you know? They’ve got spatial skills, they’re born to be engineers. Well, this really pissed me off. It did, I mean I got into engineering, does that make me a freak of nature or something?

Debbie did her research and discovered, of course, this is total bullshit. This is not human nature, but instead human culture. (Debbie, by the way, has since created and launched a successful engineering toy company for girls called GoldieBlox.)

I hear the term “human nature” thrown around a lot as a defence, usually for something unjust. Racism, sexism, homophobia, they’re all “human nature” according to some people. And sometimes, it’s actually true that humans have innate, inborn tendencies right in our DNA which are unsavoury at best.

But none of those should really matter. Even if it’s in our DNA to be racist, or to fear others unlike us, or to think girls can’t be engineers, even if all of those things were true, none of that should matter, because culture helps us break through the limits of our DNA. We do have a human nature, but more often than not that term is used out of fear of change.

It’s culture, not our DNA, which acts as the driving force for what makes us truly “human” today. Anatomically modern humans have been around for 200,000 years, yet we’re vastly different from those ancestors because of our culture. We’ve transformed from hunter-gatherers into city builders, into flying, space-exploring, technological, and relatively peaceful beings.

There’s both good and bad in human nature, but relying on either in the face of change is a losing strategy. It’s culture, not just our DNA, that’s going to make us better people and give us a better world.

A few weeks ago I was emailed by a student in grade 9 about a career in programming. She had some great questions and I thought I’d share my answers, in case you know any students considering a career in programming or software development.

What subjects/ courses were the most beneficial to you in preparing to become a computer programmer?

Math is kind of the most important subject for preparing to be a programmer, but maybe not for reasons you might expect.

Most programmers don’t use “math” very often in their everyday programming lives, at least not directly. So, you don’t really do calculus, or trigonometry, or really anything like that in most day-to-day programming.

But you do learn a few key things from math (which, by the way, were never clear to me in school! it was only much later in life, looking back, I kinda realized “Oh hey! that’s what’s so great about math!”)

Math is abstract

Numbers don’t “really” exist in the world: they’re something humans invented, as a model, for understanding quantities of things. So we take a concrete thing, like a bunch of bananas on the ground, and we create an abstract concept of “6” bananas so we can think about them without needing to necessarily always have that many bananas handy. Same goes for trig and calculus too. Planets don’t actually orbit around the sun according to the laws of Newtonian integral calculus, but we use calculus as a way to represent concrete things in an abstract form.

This is pretty much what programming is! Programming is all about abstractions.

In one sense this means one piece of code can output different things depending on what you input to it (like, Google will show you different search results for different searches, but it’s the same code that does it).

In another sense, this means you can think about things at different “levels” of abstraction, depending on what’s useful. For example, if you make an app that saves a jpeg to your computer, your code doesn’t have to think about how that “works.” Your code for saving the file is the same whether you’re saving to a normal hard drive, or a USB thumb drive, or saving to Dropbox or whatever. That stuff is abstracted away from you as a programmer, which is very useful!
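
To make that idea concrete, here’s a tiny Python sketch (my own illustration; the `save_image` function and names are made up for this example). The saving code doesn’t know or care where the bytes end up, only that the destination has a `write` method:

```python
import io
import os
import tempfile

def save_image(data: bytes, destination) -> int:
    # `destination` can be anything with a .write() method: a file on a
    # hard drive, a thumb drive, or an in-memory buffer. The saving code
    # doesn't know or care which one -- that's the abstraction.
    return destination.write(data)

fake_jpeg = b"\xff\xd8\xff\xe0"  # the first bytes of a real JPEG file

buffer = io.BytesIO()            # "saving" to memory
save_image(fake_jpeg, buffer)

path = os.path.join(tempfile.gettempdir(), "photo.jpg")
with open(path, "wb") as f:      # saving to an actual file on disk
    save_image(fake_jpeg, f)
```

The same function handles both destinations; swapping in a network upload would look no different from the caller’s point of view.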

Math uses its own notation

Math kind of has its own “language” (x, y, square root symbols, matrices, integration symbols, etc.). Programming kind of has its own language too: “code.” Code uses its own symbols (most of which are different from math’s), but just like it’s helpful to “think” in the language of math, it’s helpful to “think” in the language of programming (also like math, this takes practice!).

Math is probably the closest thing school has to teaching you to break problems down (but it’s not super great at this)

Kind of like my point about abstraction above, math is useful as a way for breaking problems down into smaller parts, which is something you do all the time in programming. In math, you might do this when you’re asked to “show your work” but in programming, you use it more as a “divide and conquer” strategy.

Math teaches some logic (but not a lot)

Logic can be pretty important in programming (although sometimes overrated, some programmers are “too logical” and forget how to be humans). But logic helps you reason about how programs should work. Logic helps you say “I know it’s impossible for the program to produce incorrect results” (but, haha, you will often find human logic is faulty. A lot)
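
As a toy illustration of that kind of reasoning (my example, not part of the original answer), here’s a Python function you can make a logical claim about: as long as `low <= high`, the result can never fall outside `[low, high]`, no matter what input you feed it.

```python
def clamp(value, low, high):
    # Claim: assuming low <= high, the result is always within
    # [low, high]: min() caps it from above, max() caps it from below.
    return max(low, min(value, high))

# We can check the claim against some wildly different inputs:
for v in (-100, 0, 5, 42, 10**9):
    assert 0 <= clamp(v, 0, 10) <= 10
```

The checks only test a handful of inputs, of course; the logical argument is what covers all of them.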

And depending on what sorts of things you program (the field is very broad!), you may actually use some forms of math.

If you do any kind of programming that involves putting something on a screen (like making apps, making webpages, etc), you’ll probably use some basic geometry from time to time (you can think of a computer screen as a grid: X pixels wide by Y pixels tall, so being able to think of a screen like a Cartesian plane is useful).
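
For instance (a hypothetical little example of mine, not from the original), centering a window on screen is just Cartesian arithmetic on pixel coordinates:

```python
# Treating the screen as a pixel grid: x grows rightward and, on most
# screens, y grows downward from the top-left corner (0, 0).
SCREEN_W, SCREEN_H = 1920, 1080

def center_rect(rect_w, rect_h):
    """Return the top-left (x, y) that centers a rectangle on screen."""
    x = (SCREEN_W - rect_w) // 2
    y = (SCREEN_H - rect_h) // 2
    return x, y

print(center_rect(400, 300))  # -> (760, 390)
```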

If you do 3D graphics (video games, computer animation, etc.) you’ll probably use a bit of calculus, and a lot of linear algebra (which I’m not sure they teach in high school; I only learned it in university—it was the only math class I actually liked!). But anyway, pretty much all 3D graphics you ever see are the result of doing stuff with matrices and vectors.
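
To give a feel for what “doing stuff with matrices and vectors” means (a simplified sketch of my own, not how any particular engine does it): rotating a point around an axis is just a matrix multiplied by a vector.

```python
import math

def mat_vec(m, v):
    # Multiply a 3x3 matrix by a 3-vector -- the workhorse operation
    # behind pretty much every 3D transform.
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def rotation_z(angle):
    # A matrix that rotates points around the z axis by `angle` radians.
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0],
            [s,  c, 0],
            [0,  0, 1]]

# Rotating the point (1, 0, 0) by 90 degrees lands it at (0, 1, 0):
point = mat_vec(rotation_z(math.pi / 2), [1.0, 0.0, 0.0])
print([round(x, 6) for x in point])  # -> [0.0, 1.0, 0.0]
```

Real engines use 4x4 matrices (so translation fits in too) and run this on the GPU, but the core idea is the same.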

If you want to do things like artificial intelligence or “machine learning” (both of which are big topics for search / social network companies (e.g., advertising companies) these days), algebra and statistics are pretty useful too.

BUT! Programming really isn’t all math. There’s a bit of it, but it’s mostly foundational (which is lucky, because I’ve always been pretty bad at math; also, it turns out computers are pretty good at math!).

Other subjects are quite useful too. A scientific mindset is useful because of its emphasis on investigation and scepticism. Graphic design / art is very useful if you want to make programs people use directly (apps / websites, primarily), because things shown on a flat 2D surface like a screen need graphic design just as much as posters and magazine layouts do. Oh, also, English is great because, well, you’ll still need to communicate with lots of humans when you work with them! Being able to read + write well is always useful, especially if you’re good at arguing for things with essays.

Around how many languages do you have to learn? Which ones are the most important?

(I’m assuming you’re talking about programming languages here, and not human spoken languages. If you’re talking about human spoken languages, English is pretty much the lingua franca of programming, but I suspect knowing Chinese and/or Hindi might be good ideas for the future too!)

The neat thing about programming languages is there are many of them, and which one you use depends on what kinds of programs you need to make (or, how you want to make it).

For example, if you want to make iPhone apps, you pretty much need to know either “Objective-C” or “Swift” (these are the languages I use every day at my job). If you want to make websites, you pretty much need to know Javascript (you may also need to know others for web programming, but you definitely need to know Javascript at a minimum).

The good news is, once you learn one programming language, most others are pretty easy to learn, because at a fundamental level, all programming languages work pretty similarly. And while you’ll probably become a pro with a few languages, it’s never a bad idea to eventually become familiar with a bunch of them (not something you need to do when you’re starting out, of course, but good for down the road). Although all programming languages are essentially equivalent, many let you program in a different style, which often means you-the-programmer are thinking in a different mindset.

It’s kind of like with human spoken languages: each language has its own metaphors, its own way of expressing concepts. In some languages, it’s really easy to express some ideas but hard to express others.

What degree is needed to be successful?

The neat thing about programming is you don’t need a degree to be a programmer. The vast majority of programming jobs don’t require degrees, most are interested in experience (but there are loads of jobs for beginners too). If you want to work at Google, maybe they’ll require a degree, but the vast majority of jobs don’t.

That doesn’t mean you shouldn’t get a degree, of course! There are lots of benefits to getting one (or many!) but I just wanted to point out it’s an option.

The most common degrees for programmers are Computer Science (this is what I have) and Software Engineering (if you do this degree in Canada, where I’m from, you’re legally considered an Engineer—you get a ring and everything). They’re kind of similar, but I’ll explain each a bit:

Computer Science: This varies depending on which school you go to, but at a fundamental level, Computer Science (CS for short) teaches you a lot of theoretical stuff about how computers (mostly software, but some stuff about hardware) work. You will learn programming, but CS won’t necessarily teach you how to be a good programmer (that is, CS usually isn’t about on-the-job skills; you mostly learn those making programs on your own, or, actually at a job!).

CS will teach you things like “how does logic work” (you’ll take classes like Discrete Mathematics where you’ll learn how to do logic and proofs), you might take courses on computer graphics (like how to program 3D things), you might take courses on computer networks (like, how does information get from one computer / device to another over the internet, across the whole planet?), algorithms and data structures (like, I have a lot of information, how do I put it in a format the computer can work with quickly? this is the sort of thing Google is made of).

CS will also leave you with a feeling of “oh my god how does any of this stuff actually work and not break all the time???” Which is both awesome and terrifying. Like, how do wifi signals work, through the air, invisibly, and so fast??

Software Engineering: I know less about this because I did a CS degree instead. But, SE is more focused on the “engineering” aspects of making software: things like reliability (how do I ensure my program won’t break?), doing specifications for clients (the government needs us to build them software, we need to figure out precisely what they need), that sort of thing.

Both CS and SE involve programming, and both do overlap a little bit.

I’d also like to suggest, though, thinking about different degrees (and yet, still being a programmer!). So as I mentioned, you don’t strictly need a degree in CS or SE to become a professional programmer. You will learn some programming in those degrees, but you won’t end up being a great programmer just because of those degrees. Like most things, being good at programming is usually a result of just lots of practice / experience.

But so anyway, I’d suggest at least exploring the possibility of other degrees too, because they’ll give you different perspectives on the world, which I think are desperately needed among programmers (myself included!). For example, most programming languages today are considered to be “object oriented” languages (which basically means every part of your code is a self-contained abstract object that represents a thing. You might have an object for a Person or a Dog or a Rocket or a Feeling… basically anything!).

Anyway, the concept of making “object oriented” programming languages came not from someone who studied computer science, but from someone who’d studied biology (Alan Kay). Alan thought about making programs act kind of like cells in our bodies (self-contained and independent, but communicating amongst themselves).
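
Here’s a little Python sketch of that cell metaphor (my own hypothetical example): each object keeps its own state, and other objects only interact with it by “sending messages,” i.e., calling its methods.

```python
class Cell:
    # A self-contained object: it owns its state (a name and an inbox),
    # and other objects can only reach it by sending messages.
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, message):
        self.inbox.append(message)

    def send(self, other, message):
        # "Message passing": we never touch `other`'s internals directly.
        other.receive(f"{self.name} says: {message}")

a = Cell("A")
b = Cell("B")
a.send(b, "hello")
print(b.inbox)  # -> ['A says: hello']
```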

My point here isn’t “do biology instead of CS,” but that taking something else will give you different perspectives while programming.

So, biology is a neat example (as are most of the sciences). The humanities are another idea, because, well, humans are important! Also having a degree that forces you to read a lot of books means you’re going to be exposed to a lot of ideas and perspectives.

What is your favorite aspect of programming?

My favourite aspect of programming is its potential to have vast and monumental positive impact on the world.

From a certain perspective, programming has already done this! Computers (laptops / phones / etc) are everywhere, and they’re connected through pervasive networking. And that’s super, incredibly amazing! As a consequence of this, information is much, much more accessible to all those connected (Wikipedia, Google, even looking up things on Youtube! And, I’m a little biased because I work for them, but Khan Academy is pretty great too!). Communication is more or less effortless: within seconds you can be texting or calling or emailing or video calling (!) anyone else on the planet (!!). None of this would be possible without programming, and that’s really, really cool.

But from another perspective, I think there’s a lot more positive impact programming can have on the world. You can look at programming as a “way to make things” (like apps, websites, etc) and it definitely is a way to do that. But you can also look at programming as a way of thinking, which is something we don’t really do very often with it.

For example, when we learn to read and write human language, we do so for a few reasons.

One reason is so we can record ideas: I can put my idea down on paper (or an email or whatever) and then later, someone (maybe even future-Me) can read it and understand my idea (and of course, the invention of writing revolutionized humans when that happened! yay!). This idea-recording is good for all sorts of things, like remembering stuff (like, I can write something down and my ideas will live on after I die, that’s cool!), communicating (like this email!), or expressing ideas (like novels or essays which make arguments).

But another reason we learn to read and write (and I think one we don’t so often talk about) is because when we write, it changes the kinds of ideas that we have. Like, the very act of writing something down changes the way we think, the way we express ideas. Writing an essay isn’t just recording words you would have otherwise said aloud; writing an essay uses language in a more structured way than if you were just speaking the words. And writing an essay lets you make an argument in a bigger way too: probably nobody will listen to you speak aloud an argument for an hour (and it’d probably be very hard to make a coherent argument that way, anyway!), but they can sit and read through a written essay.

Likewise, programming can be (though these days, it often isn’t) used to record ideas, too. And similarly, these ideas are different kinds of ideas than if you were just thinking them in your head, or saying them out loud, or even writing them down.

While writing lets you put down an idea word after word, one thing leading to the next, in a very linear way, programming can have ideas that aren’t just one thing after another, but are complex. And programming lets you simulate ideas too, so you can play with them and explore them (think about reading about how cities and traffic work in a book, vs playing a game of Sim City, where the city is simulated by a program. It’s not just dead words, but active, moving things you can poke around at and play with).

This is the kind of thing I love about programming!

I imagine a world where everyone knows how to program, not so they can become professional programmers, but so they can learn how to think about complex things (like, how does the ecosystem work? how do cities work? how do viruses like HIV or the flu spread?). Just like we teach everyone to write, but we don’t necessarily want everyone to become a novelist or journalist, I want everyone to learn programming too.

I have been looking into the topic of women and minorities in the field and discovered there haven’t been as many in recent years. What do you think has contributed to that?

I think there are a few reasons, but I think the biggest one is there is unfortunately a lot of sexism (and other forms of excluding POC, etc) in the programming industry. Luckily there are a lot of amazing people working very hard to correct this in our industry. It absolutely will not happen overnight (and sadly, sexism and other forms of oppression aren’t just in programming), but there are good efforts being made by really good people.

Another reason, related of course, is programming is often portrayed as “for boys / men” and not “for girls / women.” (Which I will come right out and say is a load of crap! There is absolutely nothing about a person’s gender / ethnicity / ableness / etc. which would make them less good at programming.) Again, there are awesome people working on this. There’s the GoldieBlox line of toys, which are designed as engineering toys for girls (this isn’t strictly related to programming, but it doesn’t hurt!). There’s also Hopscotch (a company I used to work for), founded by two women who wanted to make programming for girls and boys.

So yeah, unfortunately the industry has a sexism problem, that lots of people (myself included) are trying to help, but that’s probably the reason why there aren’t as many women / POC / etc.

So there you have it, I hope my mountain of text is useful to anyone considering becoming a programmer. Software development has its challenges, both technical and social, but it’s still a wonderful field, and it’s improving every day.

If you had asked me in 2006, the year before the iPhone was unveiled, to design an “iPhone,” I probably would have drawn you something like these. My imagination would have been trapped, stuck in the mindset that portable devices from Apple looked like iPods of the time, and completely unable (or unwilling?) to conceive of what a 2007 iPhone looked like. But once I saw what the iPhone actually looked like, well, of course it had to look like that. It felt obvious once I saw it. How could it not look like that?

This happens to me all the time. Pretty much any time I see anything from Bret Victor I’m similarly blown away. I had never even considered maybe programming should be visible, or learnable, or that you should create dynamic drawings or animations. I had never considered that the computer should exist in a physical world, not be trapped in a rectangle. But once I saw all these demos, suddenly I could see computing differently.

There’s a well-studied bias in humans called What You See Is All There Is, which, as the name implies, biases us to only consider new things in terms of what we’re already familiar with. I couldn’t see the iPhone in 2006 because I was only familiar with iPods then, and I wasn’t trained to avoid this bias, either.

This is why so much of technology is the same old, year after year. It’s why programming languages are all plain text based, because that’s what programming language designers are familiar with. It’s why every mail app has 3 columns: mailboxes, then messages, then the message. These are almost certainly not the ideal forms, but it’s what we seem to be stuck with.

I still don’t know exactly how to avoid this bias, how to see beyond what I’m already familiar with, but I think I have some clues.

The first is studying design. It seems to me one of the goals of design is to see beyond what’s immediately familiar, to push forward into something new. Design tries to get to the heart of needs before it starts looking at solutions, and that seems to be a good mindset for thinking beyond what you currently can see. (See also Inge Druckrey’s Teaching to See)

Another useful aid for seeing is books. Every book I read reveals to me a new perspective, a new point of view on some aspect of the world. With novels, I feel empathy for characters in different situations; with non-fiction I learn in depth about the topic of the book. (See also “Reading Tip #1” in Bret’s reading list.)

I still don’t really know “how to see” beyond my immediate surroundings, but I feel like with every new mindset and perspective, I get a few more glimpses of what’s out there. But I’m curious: what do you use to see beyond what we’ve currently got?

There have been several rumours posted in the past few months of a team inside Apple working on a version of Xcode for the iPad Pro. Since programming on touch screen devices is kinda my thing, I’m very excited about this possibility!

Apple claims the iPad is its vision of the future of personal computing, but I worry how exactly Xcode for iPad fits into that vision in practice. This essay explores those rumours and looks at the true potential of iPad as a vision for the future of computing.

It’s hanging out on my sorta-launched portfolio website, Why Not Fireworks?, which is pretty boring right now, but I intend on adding stuff to it in the coming months. There’s no RSS feed for it yet, but in the meantime you can subscribe to the newsletter or just patiently wait here.

The essay was a ton of fun to write, and I hope it’s just as much fun to read. What do you think about it?

In my last post I wrote about the “ideal setup,” where I say “I could do X if only I had this ideal way to do X.” Like, I’d write more if only I had the perfect writing environment. I’d blog more if only my blog supported offline editing and static pages, but could also be edited from Dropbox on my phone.

The big way for me to work around this is by letting go of my need to have any kind of ideal setup and instead opting for the lowest friction setup I can make work. That way, instead of worrying (read: procrastinating) that I don’t have the perfect tools, I tell myself “that perfect tool probably wouldn’t benefit me that much anyway” and I see how far I can get with something easier, at least to get started.

When I wrote about starting a “developer diary,” I mentioned how I use a regular markdown file in a regular text editor. There are tools specifically designed for this job, and they’re probably great! But instead of spending hours trying to figure out which was best, what I really needed was to start writing so I could get my work done. I keep a list of potential topics to write about in a todo app, because the tool was handy and good enough to jot my ideas down. I’ve been dreaming of a spatial-graphical-hypertext-wiki for ages as the perfect way to capture stuff from my head, but for now I just jot things down in text files. None of these setups is ideal, but they let me actually do the thing I want to do.

This sort of low friction mindset also melds well with prototyping. Instead of trying to set up a whole perfect environment to prototype an idea, go with the lowest friction version of it. Maybe this means writing really janky code, or maybe it means not writing any code at all! I wanted a way for my computer to remind me to take screen breaks every half an hour, and my first instinct was to write a little app. This certainly would have taken hours to get jussssst right, but instead I remembered OS X has a setting to speak the current time every half an hour. Bingo: not ideal, but pretty much as low friction as you can get.

Going low friction is essentially “just get it done,” which is hardly a new concept. In some ways it’s Keep It Simple, Stupid. In other ways it’s like a Minimum Viable Product (for itty bitty mini “products”). And it’s definitely still worth investing in better tools in the long run, or else we’ll be stuck with shitty text editors, todo lists, and plain-text programming languages forever. But for now, going low friction has been really helpful for me; it reminds me to let go of my urge to nerd out and instead just get moving.

There’s this thing I like to call the “ideal setup,” where I say “I could do X if only I had this ideal way to do X.” Like, I’d write more if only I had the perfect writing environment. I’d blog more if only my blog supported offline editing and static pages, but could also be edited from Dropbox on my phone.

I get so worked up over this ideal setup in my mind that it prevents me from actually doing the thing I wanted to do in the first place (or, maybe, it’s a symptom I didn’t really want to do that thing in the first place). It’s a form of procrastination, in a sense, because I’m justifying not doing the thing by the fact that I’m somehow lacking the proper tools.

To be clear, I know that tools make a difference. Writing in a plain text editor vs writing in some kind of graphical spatial hypertext wiki (ah, my true ideal setup) is going to alter the way you think and the kind of work you will produce. A responsive, visible programming language is going to let you create vastly different programs than the invisible, text-based coding languages we blindly stumble through today, too.

But there comes a point when the difference between what you’ve got and that ideal setup is small enough that chasing it is just slowing you down. It’s kind of like that whole “a great guitarist can be amazing even on a crappy, out-of-tune old acoustic found in your basement” sort of thing. But it’s also kind of like “you’d be surprised how much you can accomplish if you just got started with what you have, for now.”

In the next post, I’ll go over some of the ways I try to work around this fun little aspect of me. In the meantime, what’s your ideal setup? How do you work without it?

A few years ago, while attending the NSNorth 2013 conference in Ottawa, a fellow Canadian friend and I talked about one of the speakers we’d heard the night before. “Did you notice,” my friend said, “how when he was referring to business owners he kept calling them businessmen?” I had noticed, and it stuck out to me like a thorn. In fact, I’d been noticing this a whole bunch since moving to the US in January of 2013.

When I was growing up in Canada, I remember saying something like “the policeman” and being corrected by my mother. “We don’t say policeman, we say police officer because there are women who are police, too.” I was like, five years old, but that made sense. “What about a fireman?” I asked, to which my mother replied “We say firefighter instead.” And on goes the list: councillor instead of councilman, chairperson instead of chairman, letter-carrier instead of postman. There’s no need to gender these professions, just like we don’t have “dentistman” or “lawyerman.”

But in the States, I’ve noticed the language is much more gendered. The NSNorth speaker repeatedly referred to businessmen, but I hear it on the news and in everyday conversation all the time. At the very least, it’s good that if a woman holds a given gendered job, she’ll be referred to as a “chairwoman,” but the gendering seems unnecessary to begin with. Plus, by using gender-binary terms, it leaves out people who don’t identify as “man” or “woman” and reinforces those as the only two acceptable ends of the gender spectrum.

Now I’m not saying “all Americans use gendered terms,” and I’m also not saying “all Canadians use non-gendered terms.” I’m merely saying, anecdotally, that I’ve observed more gendered language in the US than in Canada.

This isn’t an attempt to police language, but I am trying to point it out so you can think about it, reflect on your own language usage, and how it affects others. It feels like a small thing, but language matters, especially in the software industry, where we’re majorly suffering from a lack of diversity.

There’s a lot of talk in programming about challenges and eventual successes, about how a given problem stumped a developer for a long time, but eventually they overcame it and shipped it / blogged about it. But I think that paints a lopsided view of programming because aside from “outage post-mortem” posts, we rarely talk about the failures in programming. We rarely talk about how programmers struggle and fail at things, even though it’s a pretty natural and regular part of our job (and any job, really).

So this is my inspirational post about failing.

About a month ago, I started a project at Khan Academy to implement iOS State Restoration in our app. State Restoration is that feature where your app is supposed to pick up on the exact screen where you left off last time, even if the app happens to have quit in the background in the meantime. To the person using your app, it’s been running the whole time.

It was scheduled as a two week project, but I set a personal goal to do it in one (o’ past-Jason, you were so young). I tried to do everything right, from the start: I looked up relevant documentation on Apple’s site, which is decently comprehensive, if a little confusing at the start (when you’re not accustomed to the lingo of a given feature, many of the different parts sound the same; see also Core Data). I watched the relevant WWDC sessions (2012, 2013). I planned out the project in a task manager, trying to break things down into small logical pieces.

Pretty much all my prep work was successful. I did the right things. But State Restoration was still way harder than I expected.

Apple says it “Just works” but what they really mean is “If you use Storyboards and your view controllers use mutable, optional references for everything, it Just works.” I’m not really trying to crap on State Restoration, because it was built with Objective C in mind, and in that frame of reference, yeah, all your view controller references are going to be mutable and “optional.”

Most of the Khan Academy app is written in pretty Swifty Swift, which means we use immutable (let), non-optional references (or when possible, values (struct and enum)) in our view controllers. As a consequence, that means we also inject most of what our view controllers need right in on init. State Restoration kind of wants us to return initialized-but-bare view controllers, which can then be setup at some point in the future with restored data (like say, the ID of the object shown by the controller).
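To make that tension concrete, here’s a hypothetical sketch (the type and key names are made up for illustration; this isn’t our actual code): the same kind of controller written in our injected style, and again in the mutable, optional style State Restoration nudges you toward.

```swift
import UIKit

// Our usual Swifty style: the data is injected once, at init,
// and never changes afterwards.
final class ExerciseViewController: UIViewController {
    let exerciseID: String  // immutable and non-optional

    init(exerciseID: String) {
        self.exerciseID = exerciseID
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("not supported")
    }
}

// What State Restoration wants instead: a bare controller now,
// data filled in later. The `let` becomes a `var`, the non-optional
// becomes optional, and every use of it now needs unwrapping.
final class RestorableExerciseViewController: UIViewController {
    var exerciseID: String?  // mutable and optional

    override func encodeRestorableState(with coder: NSCoder) {
        super.encodeRestorableState(with: coder)
        coder.encode(exerciseID, forKey: "exerciseID")
    }

    override func decodeRestorableState(with coder: NSCoder) {
        super.decodeRestorableState(with: coder)
        exerciseID = coder.decodeObject(forKey: "exerciseID") as? String
    }
}
```

That second shape is exactly why optionals become infectious: everything downstream of `exerciseID` has to cope with it being nil until restoration happens.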

Unfortunately, we didn’t really distill the realizations of the above two paragraphs until the project had already gone a week over its allotted time. In the third week of the project, we came to those realizations, and I tried rewriting one of our view controllers to see what it would be like (spoiler alert: optionals are infectious; mutable references throw a wrench into everything).

The project was overdue and my solution didn’t seem feasible to redo throughout the entire app, so we decided to call it: continuing to implement the feature beyond the basics (the main UI of the app correctly restores, at least) currently isn’t worth the effort. There are better ways my time can be spent helping learners using our app.

This kind of sucked for me. I’ve been writing iOS apps since 2008. I’m a professional software developer and I’m pretty good at it these days. I even had a lot of help from a former UIKit engineer. But still, I struggled quite a bit with this.

My point here isn’t really about State Restoration (I’ll bet there are solutions to making it work in a Swiftier world, or if not, I’ll bet there are third-party solutions that work better). My point is really just that programming is hard; even people who have been doing it for a long time still struggle from time to time. And we don’t talk about this very much. It’s a little embarrassing, but it’s definitely natural.

I wish we talked about failure more often, because it’s really not as bad as it seems. Succeeding or failing, I’m still learning things either way, and I’m pretty sure I learn more when I fail than when I succeed anyway (or at the very least, as the phrase goes, if you’re not failing every now and then you’re probably not doing anything too interesting). But when we only talk about succeeding, we paint an inaccurate picture of what it’s like to be an experienced programmer, and that harms pros and newcomers alike. So let’s talk more about failure. How have you failed?

This is a variable (or parameter or @property) that says “I need a thing that conforms to the Droppable protocol and which is some kind of UIView.” Basically, you need to be able to treat this thing like a view, and also like a Droppable.

It turns out this is kind of hard to do in Swift! There’s no natural syntax to express this, but here’s a trick we use at Khan Academy that helps us accomplish the same goal (it’s not exactly the same thing, but it’s close enough to be useful):

We make our protocol like usual, but we add an extra property to represent the view:
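Roughly, that protocol might look like this (a sketch, not our exact code; `handleDrop` is a made-up stand-in for whatever requirements Droppable actually has):

```swift
import UIKit

protocol Droppable {
    // The extra requirement: anything Droppable must expose a view.
    var view: UIView { get }

    // Hypothetical stand-in for Droppable’s real requirements.
    func handleDrop()
}
```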

So, anything conforming to Droppable is expected to also return a view. Now when we have a Droppable property, it has-a view (which isn’t exactly like the Objective C way, where Droppable is-a view, but it’s close enough), and we can access this view with a simple myDroppable.view.

But it’d kind of suck to have to implement this view property on every view conforming to Droppable, so we then use Swift’s protocol extensions to give an implementation to all UIViews in the module:
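That extension might look roughly like this (again a sketch, assuming the `view` requirement from above): any UIView conforming to Droppable satisfies the requirement by simply returning itself.

```swift
// Constrained protocol extension: every UIView subclass that
// conforms to Droppable gets this `view` implementation for free.
extension Droppable where Self: UIView {
    var view: UIView {
        return self  // the conforming view vends itself
    }
}
```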

I’m a guy in my mid-twenties who lives in New York City and works as an iOS Developer. This seems to be a perfect trifecta for putting me in a culture which drinks regularly. To a certain extent, I’m completely fine with this: I usually enjoy alcohol and I usually enjoy those around me who have had some (some) too.

The thing is, I don’t always want to drink, but I often feel compelled to by the people around me. I’ve noticed that when you’re with drinkers, there are only two acceptable ways to be: either you’re drinking with them, or you’re someone who does not drink. But if you’re someone who sometimes drinks, it’s expected that you’ll be drinking with the group.

While I’m glad that it’s generally respectable to be a non-drinker (for people in recovery or for those who just “don’t drink”), I wish it were acceptable to say “I don’t want to drink tonight.” If you’ve tried this before, you’ll no doubt be familiar with the reactions of the group: “What do you mean you’re not drinking tonight? C’mon, have some drinks!”

You could argue the people asking you to drink mean well, and I think they think they mean well, too. But I often find this reaction comes from a place of not wanting to be judged. It seems if I tell people I’m not drinking on a particular night that I’m judging them for drinking, too. In reality, I’m respecting their choice to drink and hoping they’ll do the same for me. I mention this not in reference to anybody in particular, just towards drinking culture in general (I feel like I’ve unfortunately been on both sides of this).

I’ve always interpreted The Strange Case of Dr. Jekyll and Mr. Hyde to be about alcohol (the guy drinks an elixir and starts acting wildly different; eventually he becomes dependent on it), and sometimes when I drink I feel a bit like Mr. Hyde, too. No, I’m not violent, but I’m different. And that’s enough of a reason for me to sometimes not want to drink on a given night (or month).

Ultimately, the reasons shouldn’t matter. Maybe the person has troubles with alcohol, maybe they’ve got a stomach ache, or maybe they just don’t want to drink because they don’t want to drink. It’s not about judging anyone else; what should matter is what that person wants to do.

In “Walking Around in My Thoughts” I described how getting thoughts out of your head and into some form of reality can be a really helpful thinking tool. Today, I want to look more specifically at the tools (I’m referring to “media” here as “tools” because I’m looking from a perspective of using media to accomplish goals) used for getting these thoughts into reality.

Talking about your thoughts forces you to give order to them, to arrange them and give them some semblance of meaning. By writing your thoughts down, you can explore them in a linear fashion, building and exploring arguments and one-to-one cause and effect. Drawing or painting a picture lets you explore and react to ideas visually, seeing how different parts of your ideas will compare and relate to one another. These are some of the most common and at-hand tools we have for thinking with our thoughts.

But these tools are primarily for thinking static thoughts: ideas which kind of exist, but don’t change. If I’m trying to think about a flock of birds, a picture is only going to let me think about a small moment in time, not the movement of the flock or the interaction of the individual birds within. Writing lets me put down a description of what’s happening, but it’s hard for me to react to that description, as I’ve got to do a lot more work in my head to put the thoughts back in motion.

You can sorta do it with these tools, but they’re not really great at it. What I’m interested in are tools specifically designed for thinking about complex or dynamic thoughts, and these have been much harder to come by. The only tool that really springs to mind is programming, but it’s not so great either.

Programming sorta lets me think about dynamic thoughts: I can program a simulation of a flock of birds, or a traffic jam, but most of today’s programming environments make this quite a chore. I’ve got to set up a program, figure out how to get some kind of graphic on the screen, figure out how to model all the pieces, figure out how to make those models interact with each other. And that’s assuming everything goes perfectly well and there are no bugs to yank me out of my thoughts. With programming, I’m more often thinking about how I’m going to encode the thoughts, and rarely about the thoughts themselves.

This is a terrible shame, because each tool we have for thinking with is a different perspective, a different vantage point for making sense of our thoughts and of reality. Computers are excellent at simulating these different points of view, but programming is so obtuse it’s almost impossible to use the computer in this way. Programming ought to be a flexible method of representing and manipulating these perspectives, not a cumbersome way of managing computer resources.

There are many tools for walking around in your thoughts, but there’s no reason to believe we have all the possible tools. We change our tools and then our tools change us, right?

In Friday’s post, while talking about “rubber duck debugging,” I briefly alluded to something I called a “developer text file.” I thought I’d elaborate on that a little bit today.

For many of my projects I keep what I call a “developer diary,” usually just a plain old markdown file which I use to “walk around in” my dev-related thoughts for that project (Quiver looks like a neat tool for this too, but I haven’t tried it. I suggest going with whatever is the lowest friction.)

I usually make a new header every day (“February 15 2016”) and try to summarize what I’m working on, what’s going well, and what I’m struggling with. I’ll also add notes for things I still need to get done, and maybe where I might start tomorrow, as well.

Finally, I often use these text files as a scratchpad for things I’m working on. If I need to debug a lot of console data, it’s often helpful to paste it in my dev file so I can clean it up and annotate it. This helps me see a clearer picture of what I’m trying to debug, and helps me compare data while I make changes.

I’ve been finding this sort of file crucial while working on a remote team. The rest of my team is 3 hours behind me, so being able to have great communication with myself is essential while debugging. I get a chance to think through my problems more clearly, and if all else fails, I can always send them today’s snippet of my dev diary (as a Github gist) so they can catch up on my thoughts, too.

What do you use to help you walk around in your thoughts while developing? If you don’t use something like a dev diary, I highly suggest you fire up your favourite text editor and give it a shot.

When I was younger, I remember proudly positing to my penpal that “the thoughts in your head are in their purest form, written thoughts are only ever an approximation.” Although this seemed to make some level of sense to me at the time, I pretty strongly disagree with it today. Instead, I see head-thoughts and written-thoughts as entirely different beasts: not only does writing put your thoughts in a different form, the act of writing makes you think in an entirely different way.

This is what Marshall McLuhan means when he says “the medium is the message.” It’s not just about the “container” of your ideas (your head vs written words), but that the medium itself changes the way we think so much, that for all intents and purposes the medium we use is the message we’re trying to convey.

As an example, I often have ideas in my head on how to solve a problem. I think I know a lot about the problem, like who’s involved, what’s causing it, what the effects of the problem are, maybe a possible solution. But once I start writing about the problem (in an essay or maybe a developer text file, depending), suddenly the problem changes. Suddenly, I can see new parts to the problem in ways my head-thoughts couldn’t.

If you’re a software developer, you may have heard of “rubber duck debugging,” where just by explaining (in writing or out loud) your problem to someone or something (maybe even a rubber duck sitting on your desk), you’ll often figure out the problem on your own.

There’s no magic in this, but rather I think it’s a consequence of putting your thoughts in the world and walking around in them (this phrase is lifted from the fantastic Double Fine Adventure documentary episode of a similar name).

And of course, walking around in your thoughts is not restricted to just writing or saying them out loud. The useful thing is putting your thoughts into some kind of reality so you can make use of them. You might do this with written or spoken language, or you might do this by drawing or painting a picture, or by playing a song on the piano or guitar, or by programming a prototype or spreadsheet. Each of these media will let you think differently, but they’re all objects to think with.

This is why writing or sketching or prototyping or whatever is so important: yes it’s about communicating ideas to other people, but I think just as importantly (and often overlooked), it’s also about communicating these ideas back to yourself. You have all these channels of reality, like your sense of vision or space, and you don’t get to use those as well when you’re only thinking in your head. But if you draw a picture of your idea, suddenly you are literally seeing the idea in a different way.

Every time there’s a mass shooting in America, which is unfortunately pretty often these days, I see a lot of people on Twitter feeling angry and hopeless. They’re angry because guns are dangerous, guns kill people far more often than they save them (almost never), and there’s a whole culture in America devoted to keeping their guns. More frustrating still, it seems impossible to change this in America. The best Americans can expect from serious gun reform is fewer guns, not zero guns.

The discussions on Twitter often seem useless: either you’re preaching to the choir of your followers, who already agree that guns are bad, or you’re in a shouting match with someone who wants to keep their guns, thank you very much. And it’s not just Twitter where this happens; everywhere the topic of guns comes up, everybody already has their mind made up. So how do we change this?

“Babe give you kisses if you hit a rubber duck now” (Guns, Guns, Guns)

I think the first step is to try to understand where everyone is coming from. I think broadly speaking, whether you like guns or not, nobody wants to see anyone die. People who oppose guns (like me) want to get rid of them all because guns are designed to kill, end of story. People who want to keep their guns think guns will keep them safe from intruders or other violence.

Second, everyone thinks they’re right. Anti-gun people think they’re right because ultimately guns are made to harm, and harm is bad. Pro-gun people in America argue not only are they right, they literally have a right to guns. My point here isn’t so much that both sides think they’re right, it’s that this polarizing thinking is part of the problem: we’ve created two “sides” here. You either agree with me or you’re a “gun nut,” a phrase which just serves to other the pro-gun group, which helps nobody.

Third, these two sides each form pro- and anti-gun cultures that are more than the sum of their individual members. I think this is the big part we miss in most discussions about guns. It’s not so useful to convince an individual to get rid of their guns; you need to dismantle the whole of gun culture to see real change. Gun culture isn’t just someone who thinks guns are a good idea. Gun culture is that someone, and all of their friends who also like guns. It’s a set of values and a way to fit in with a group (same goes for the anti-gun culture).

“You be the red king, I’ll be the yellow pawn” (Guns, Guns, Guns)

I’m not a sociologist and I don’t know a whole ton about changing cultures, but it seems like if you want to disarm gun culture, you’ve got to do more than just complain on Twitter. Read up on activism, learn how cultures have changed for the better in the past (Women’s Suffrage and Civil Rights movements spring to mind). It won’t be easy, but it is possible.

When Swift went open source, Apple also open sourced the process by which Swift changes: the swift-evolution mailing list and corresponding git repository. This is huge because not only does the team share what they plan to do to Swift in the future, they’re also actively asking for feedback from and talking with the greater developer community about it.

Apple clearly wants Swift to be the programming language for the coming decades, not just for its own platforms, but for systems and application programming everywhere. And so to help it achieve this, Apple is open and listening for feedback on the mailing lists.

There is one problem with this though: the mailing list is self-selecting. Generally speaking, if you’re an active participant of the swift-evolution list, you’re already quite bought in on Swift. This is generally a good thing, since being bought in on Swift means you care a lot about its future, and you have a lot of context from working with current Swift, too.

But by only getting suggestions and feedback about Swift’s evolution from those bought in on Swift, the Swift team is largely neglecting those who are not already on board with Swift. This includes those holding out with Objective C on Apple’s platforms, and those using different languages on other platforms too. These groups have largely no influence on Swift’s future.

Let’s take a recent example: the “Swiftification” of Cocoa APIs. The basic premise, “Cocoa, the Swift standard library, maybe even your own types and methods—it’s all about to change” might be good for Swift programmers, but I imagine stuff like this is the exact reason many Objective C programmers avoid Swift in the first place: they quite like how Objective C APIs read. Although the Swift team is actively asking for feedback, my guess is many Objective C developers’ reactions to this would be immediate rejection. And that sort of thing doesn’t bode well for providing feedback on swift-evolution. And thus, Swift becomes less likeable to these Objective C developers.

What to do? I’m not entirely sure, but the first thing that springs to mind isn’t a swift-evolution, but an apple-platforms-evolution list. If Apple eventually wants to get Objective C programmers using Swift, I think the discussions about Swift’s future need to better include them.

When I was a kid, I distinctly remember saying “Grandpa! You can’t say that!” whenever my grandfather would say something a little…dated. It wasn’t that he was a bad person, just that he came from a time when certain remarks (about race, gender, ability, etc) weren’t considered inappropriate by his social groups. But they had “become” inappropriate to my ears (I’m not trying to excuse anything he’d said, just trying to explain that to him, there was nothing wrong with the words he used).

The thing is, I know there are things we say today that future children will scoff at. They’ll tell me “you can’t say that!” and I’m trying to keep my mind open so that if it happens, I can learn from it.

But I want to get a head start on that now. I’m trying to figure out the words and phrases that’ll become socially unacceptable to use in the future so I can stop using them now. And of course, if I think something will be inappropriate in the future, that’s a pretty strong indicator it is inappropriate already. I’ll also note this goes far deeper than just the words I choose to use: there are entire mindsets that go behind these words as well, and I want to steer my mind well away from them.

So here’s my candidate list of things I shouldn’t say (which I’ve already been in process of removing from my language), in no particular order:

“Third World” From Wikipedia, “The term Third World arose during the Cold War to define countries that remained non-aligned with either NATO, or the Communist Bloc.” This seems like a pretty good candidate for not saying. The “third world” seems to describe a destitute world unworthy of discussion; total othering. Try suggestions from this article instead?

“You guys” Using ‘guys’ as a generic plural word for a bunch of people of any gender. There are numerous problems with describing a group of mixed-gender people as ‘guys,’ but I think there’s also a problem with describing a group of all-male people as ‘guys,’ because it reinforces male as the norm. Even in a statement like “the guys on the Khan Academy iOS dev team,” while it’s true we all identify as male, using the term “guys” reinforces the idea that we should all be guys. Try “folks,” “team,” “comrades,” or simply “people” instead?

“That’s crazy!” I’ve linked to this great post by Ash Furrow before about using words associated with mental illness to describe something unbelievable. This is pretty problematic and unempathetic language. Try “ridiculous,” “outrageous,” or “magical” instead?

These are the ones which immediately sprang to mind and which I’m already trying to remove from my lexicon. They may not be widely socially unacceptable yet, but I’m pretty sure they will be soon enough (this is a good thing!).

What’s on your list? What do you think your grandchildren would scold you for saying that you say now?

Here’s a weird thing I do: sometimes I procrastinate doing very small things, for no real reason at all.

I’ve read lots about procrastination (which is a fun way to procrastinate, by the way), and in general, it seems we procrastinate because distractions are so much more instantly gratifying and pleasurable than doing the Tough Work we need to get done. That makes a certain amount of sense and I definitely battle with this (hello Twitter, Instagram, Tumblr, and the whole goddamn internet today). But the kind of procrastination I’m talking about is different.

Sometimes I’ll have a small task to do, like “message my friend and tell him we’ll meet him at 7 for dinner.” That task couldn’t be easier. Look, he’s even online right now. Just, message him! And yet, oddly, my brain says “oh this is so easy, I can just do it later, no rush no worries.” Or maybe I’m writing some code and I get a compile error that I’m missing a character. It’s a one letter fix, but I say “oh that’s easy, I can fix that in a little bit” and I go do something else to bide my time.

I don’t entirely know why I do this. At some level, my brain knows the thing to do is easy, but it won’t do it. It kind of feels like my brain thinks because the task is so easy, it’s not really worth doing at all?

It kind of reminds me of this TED talk by Derek Sivers, where he says “telling someone your goal makes you less likely to do it.” Maybe that’s it? Maybe my brain thinks a task is so easy that it’s already done?

Today I stumbled across “No UI is the New UI,” an article extolling the upcoming demise of the traditional Graphical User Interface, in favour of text messaging interfaces like Facebook M and “Magic.”

Tony Aubé writes:

The rise in popularity of these apps recently brought me to a startling observation: advances in technology, especially in AI, are increasingly making traditional UI irrelevant. As much as I dislike it, I now believe that technology progress will eventually make UI a tool of the past, something no longer essential for Human-Computer interaction. And that is a good thing.

While I think conversational interfaces, whether powered by natural or artificial intelligence, have a long and prosperous life ahead of them, I don’t think they should replace traditional interfaces in most cases.

Conversation is fantastic for certain tasks, like requesting or negotiating certain broad information. Negotiating with software about which restaurant you’d like delivery from is probably much nicer than filling out a web form, but you can do so much more with a computer.

(Also, have you noticed how many of these conversational UI products in North America are really really first-world-problemy? Arrange my flights, send me food, clean my house. Oof!)

He continues,

As a designer, this is an unsettling trend to internalize. In a world where computer can see, listen, talk, understand and reply to you, what is the purpose of a user interface? Why bother designing an app to manage your bank account when you could just talk to it directly?

I’ll tell you what the purpose of a user interface is: it’s to provide much richer information, most of it visually, and to allow for deeper interaction with the thing you’re trying to understand. Let’s look at both of these:

The most important part about the Graphical User Interface is that it’s graphical. The eye is crazy fast at soaking up information. The eye can see shapes and colours, can determine hierarchies of importance, and can compare choices like nobody’s business. We also have an entire field dedicated to the eye that’s been worked on and studied for centuries, called Graphic Design.

For spoken-word conversational UIs, you get one morsel of information after another. You can’t go back, you can’t make comparisons, and you’ve got to remember it all. Everything must be described. In this mode, “huge” and “tiny” are arbitrary sounds whose meanings don’t seem as different as they’re meant to be.

For written-word conversational UIs, the information is spread out a little more, and you can technically read backwards, but you’re still left with arbitrary symbols (we know them as letters and numbers) trying to relay information.

That’s the first part, the visual richness of graphics. The second part is how we humans interact back with the computer. In the graphical user interface, we have pointing devices (mice, fingers, pencils, etc) for indicating our interest and for exploring information. This allows us to directly manipulate the thing we care about. We can point at them, select them, move them, apply things to them. The list goes on. This lets us arbitrarily manipulate things in space, whereas with conversational UIs we end up needing to manipulate things in time, like we do word after word and sentence after sentence.

The overarching theme here isn’t that graphics are better than text, or buttons are better than SMS, it’s that these different interfaces force us to think in very different ways.

We change with our tools, with the media we use for communication. We communicate with one another, yes, but we also communicate with ourselves. The representations we choose help us think. The graphical user interface as we know it today is far from perfect. It will continue to change, but graphics are here to stay as an important part of our society.

Last week, Soroush Khanlou published “MVVM is Not Very Good,” wherein he describes the “Model-View-ViewModel” pattern as the iOS Community defines it, and considers it:

an anti-pattern that confuses rather than clarifies. View models are poorly-named and serve as only a stopgap in the road to better architecture. Our community would be better served moving on from the pattern.

Today, Ash Furrow published his thoughts on the matter in “MVVM is Exceptionally OK,” wherein he agrees with the main tenets of Soroush’s article, and suggests some modifications to the pattern:

MVVM is poorly named. Why don’t we rename it? Great idea. MVVM is a pretty big “umbrella term”, and precise language would help beginners get started.

Both articles, I think, are really arguing the same things, but maybe from different directions:

Plain old MVC leads to huge, disorganized, unmaintainable code (most of it in a UIViewController).

We need better ways of organizing our code.

MVVM, as the iOS Community interprets it, is not really good enough.

Of the defences I’ve seen for MVVM, almost all of them suggest that, with some tweaks, MVVM is actually a good thing. I think, broadly, this is true, but you can’t really change MVVM without it becoming something that is no longer MVVM; the original MVVM remains, as Soroush argued, “not very good.”
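For readers who haven’t run into the pattern, here’s a minimal sketch of what the iOS community typically means by MVVM, in Swift. The types and names here are hypothetical, purely for illustration; real view models often also handle bindings, networking triggers, and so on:

```swift
import Foundation

// Model: a plain value type, with no presentation logic.
struct User {
    let name: String
    let joinDate: Date
}

// View model: wraps the model and exposes display-ready values,
// so the view controller doesn't do formatting itself.
struct UserViewModel {
    private let user: User

    init(user: User) {
        self.user = user
    }

    var displayName: String {
        return user.name.uppercased()
    }

    var joinDateText: String {
        let formatter = DateFormatter()
        formatter.dateStyle = .medium
        return "Joined " + formatter.string(from: user.joinDate)
    }
}

// The view controller (not shown) would own a UserViewModel and
// bind its properties to labels, keeping UIViewController thin.
```

The appeal, as both articles note, is that formatting and state logic move out of the view controller; the complaint is that “view model” is a grab-bag name for whatever ends up there.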

What’s important to remember is the point of things like MVC and MVVM in the first place. In our industry we tend to call these patterns, but I prefer to call them perspectives: they are mental tools which allow you to see a problem from a different point of view. It’s the old “when all you have is a hammer, everything looks like a nail” routine, where the joke is your hammer is the only perspective you have on the world. The solution to seeing everything through the eyes of a hammer isn’t to start seeing everything through the eyes of a screwdriver, or a paintbrush, but instead to see things from many perspectives.

Programming is a dark goop, hard to see under just one light. But from that goop also emerge new ways to see. Every time you form a new abstraction, no matter how small the method or how colossal the class, you build for yourself a new point of view.

MVC and MVVM are but two narrow views on what we know as programming, a confusing mess in desperate need of more light.

The general way to interpret this is “writing about programming topics is hard” (OOP vs Functional? Swift vs Objective C? Should you use goto?) and yes, it’s very hard to write about those things! (how do you talk about a program? how do you choose which parts to show? why can’t a reader execute and explore my program?) But this road is at least well travelled, and I feel like I can do so decently, myself.

But harder for me, at least, is writing about programming itself. What is programming? Should everyone learn how to program? What does it mean to learn programming? Is that learning a given language? Is that learning about if statements and map? Is that learning about algorithms? Is that learning about git? Should we treat existing languages as static? Are they all there is to programming?

Is programming for software developers? Is programming a way of understanding problems or a way of causing them?

I’ve worked on programming environments at Hopscotch and Khan Academy. I’ve been through my share of “Hour of Code” ritualized learnings. But I still haven’t found answers to any of those questions.

Last month I took a fun, but fruitless, step towards figuring out some of those questions, and maybe some of those answers. The talk I gave at Brooklyn Swift was a blast, but I think I left with more questions for myself than when I started (apologies and thanks to the audience for any bewilderment they certainly experienced).

The video for that talk is forthcoming, but I’m also trying to refine these ideas in a more presentable, readable format. I may not have the right answers, but I promise to at least ask the right questions.

One way that Backchannel’s SDK maintains its readability is through simplicity. No class in Backchannel is longer than 250 lines of code. When I was newer at programming, I thought that Objective-C’s nature made it hard to write short classes, but as I’ve gotten more experience, I’ve found that the problem was me, rather than the language. True, UIKit doesn’t do you any favors when it comes to creating simplicity, and that’s why you have to enforce it yourself.

This is a fantastic guideline and a wonderful post. I stick to a similar guideline in my code and had been wanting to write an article about it for a while. Soroush saved me the trouble.

The exact number isn’t what’s important here (my guideline is to keep Swift files under 100 lines), what’s important is giving yourself a metric, a general feeling for when a piece (class, struct, enum, whatever) of your code gets too big. For me, when a piece hits about a hundred lines it’s generally time for me to start breaking things out into smaller pieces.

A hundred lines or less (including doc strings, liberal whitespace, and no functional funnybusiness) keeps my code well structured and highly readable.
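To make the guideline concrete, here’s a hedged sketch of the kind of break-up I mean, in Swift. The types and names are hypothetical: instead of one sprawling view controller that fetches, parses, and formats, each job lives in its own small, focused piece:

```swift
import Foundation

// Fetching: only knows how to load raw data from a URL.
struct ArticleFetcher {
    func fetchData(from url: URL, completion: @escaping (Data?) -> Void) {
        URLSession.shared.dataTask(with: url) { data, _, _ in
            completion(data)
        }.resume()
    }
}

// Parsing: only knows how to turn raw data into model values.
struct Article: Decodable {
    let title: String
}

struct ArticleParser {
    func parse(_ data: Data) -> [Article] {
        return (try? JSONDecoder().decode([Article].self, from: data)) ?? []
    }
}

// Formatting: only knows how to prepare model values for display.
struct ArticleFormatter {
    func headline(for article: Article) -> String {
        return article.title.uppercased()
    }
}
```

Each piece stays comfortably under a hundred lines, each is testable on its own, and the view controller that composes them shrinks to glue.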

A year ago I gave myself a challenge: read a thousand books in my lifetime. I decided to start counting books I’d read since November 14, 2014 (although I’d read many books before this, I really only wanted to start counting then, so I could better catalogue them).

I’ve completely fallen in love with reading since finishing university. I always had lots of books around as a kid, but I think I enjoyed collecting them more than I enjoyed reading them. And having them imposed on me by school didn’t help either. So I didn’t do much pleasure reading throughout my school years.

A few things changed when I got out of school. For starters, nobody was telling me what I had to read, so I could do what I wanted. Secondly, my then-girlfriend (now wife, yay!) is an avid reader, and that encouraged me to do the same. Finally, I was starting to read more and more interesting things in Computer Science and realized if I wanted to do anything interesting in my career, it’d probably help me to be well read. I felt like I had a lot of catching up to do, but over the past few years, my newfound love of reading has been a fulfilling experience.

A thousand books is a lot for me. I’m not a super fast reader, but it seemed like a good goalpost to work towards in life. If I were to read a book a week (roughly 50 per year), it’d take me about 20 years to read a thousand. No easy feat! This year I managed 24, about two a month on average. Some are graphic novels which are easy for me to zip through, while others are dense books on education theory, which tend to be a slog. Most are comfortably in between, but nearly all of them have been enjoyable.

The books are, in order:

Mindstorms by Seymour Papert

Brave New World by Aldous Huxley

Toward a Theory of Instruction by Jerome Bruner

Changing Minds by Andrea diSessa

How to do Things with Video Games by Ian Bogost

The Circle by Dave Eggers

Annotated Declaration of Independence + US Constitution by Richard Beeman

Experience & Education by John Dewey

A Brief History of Time by Stephen Hawking

Best American Comics 2014 by Scott McCloud

Making Comics by Scott McCloud

Shades of Grey by Jasper Fforde

A Theory of Fun for Game Design by Raph Koster

The Sixth Extinction by Elizabeth Kolbert

Sirens of Titan by Kurt Vonnegut

Geeks Bearing Gifts by Ted Nelson

The End of Education by Neil Postman

Thinking Fast and Slow by Daniel Kahneman

Ode to Kirihito Part 1 by Osamu Tezuka

Scott Pilgrim Volume 1 by Bryan Lee O’Malley

The Educated Mind by Kieran Egan

Dr. Slump vol 4 by Akira Toriyama

Maps of the Imagination by Peter Turchi

Headstrong: 52 Women Who Changed Science-and the World by Rachel Swaby

Here’s to the next 976!

Modern Medicine. A great essay by Jonathan Harris on how software shapes us, and the responsibilities latent in our industry:

We inhabit an interesting time in the history of humanity, where a small number of people, numbering not more than a few hundred, but really more like a few dozen, mainly living in cities like San Francisco and New York, mainly male, and mainly between the ages of 22 and 35, are having a hugely outsized effect on the rest of our species.

Through the software they design and introduce to the world, these engineers transform the daily routines of hundreds of millions of people. Previously, this kind of mass transformation of human behavior was the sole domain of war, famine, disease, and religion, but now it happens more quietly, through the software we use every day, which affects how we spend our time, and what we do, think, and feel.

In this sense, software can be thought of as a new kind of medicine, but unlike medicine in capsule form that acts on a single human body, software is a different kind of medicine that acts on the behavioral patterns of entire societies.

The designers of this software call themselves “software engineers”, but they are really more like social engineers.

Through their inventions, they alter the behavior of millions of people, yet very few of them realize that this is what they are doing, and even fewer consider the ethical implications of that kind of power.

Oof this whole demise of the web is such a doozy. I don’t think it necessarily has much to do with the technology, though (even if web technology isn’t really the greatest to begin with). But the web is and has always been a social phenomenon — that’s why people got computers! They wanted computers to get on the web, and they wanted to get on the web because everyone else was getting on the web! It was novel and it was exciting and it was entertaining. It brought the world to your desk — email, news, games.

Computers before the web were basically relegated to whatever software you bought (from a store, usually). They could do lots of things, but it was very much a “one at a time” thing. Then the web comes along and suddenly you’ve got endlessness in front of you. I think in many ways the web is (and has always been) a lot closer to TV than it has been to “traditional” software. The web is your “there’s 400 channels and nothin’s on” situation, except there are infinite channels and still nothing is on.

And so I think it’s really like TV because it encourages almost twitchy behaviour, via links. “Hey what’s this? and this? and that? and these?” And it’s very entertaining and stimulating (or at least minimum viable stimulation). Much like TV where when one show ends, another comes on right after it, so too is the web neverending.

And all of that is to say nothing of the content of the web! which I think has become more and more like TV lately. The links-as-channel-changers really caught on and so every website fights to keep you around. Also as bandwidth increases, photos and video become easier to use, and so now it’s easier to look at pictures and, much like TV, be entertained by video. And now you can have many videos playing at once in a feed stream and O the humanity the people making the stuff don’t really care very much so long as you’re watching it and whatever way they’re making money off it.

Reality TV and social networks feel very contemporary to me. Facebook (and really any social network) looks quite a lot like reality TV if you want it to look like reality TV. “Haha, I know Jersey Shore is stupid, I just watch it to make fun of it,” is even more compelling when it’s your idiot cousin on Facebook. It’s The Truman Show except you’re the one starring in it and everybody else you know is also starring in it and the door at the edge of the ocean is actually open but you can’t walk through it because you know you can’t watch once you’re outside. And but so now that everybody’s in this reality TV show nobody cares where the set is; if it’s on the web or if it’s on an app it makes no difference to them. They don’t see it as a democracy versus a dictatorship, they see it as VHS versus a DVD.

The good news is anyone (with enough money) can remake the web. The bad news is almost nobody cares or wants to.

The film industry’s problems with representing people of color in mainstream movies are well known and documented, but it can still be shocking when someone finds a way to display them in a new and meaningful way. Cue actor Dylan Marron—known to podcasting fans as Carlos the scientist from the popular Welcome To Night Vale—who recently began posting videos on his YouTube channel of every line spoken by people of color in various popular films. And, surprise, surprise, it turns out that they’re not very long videos at all.

Rails, a popular Ruby-based web framework, was born in 2005. It billed itself as an “opinionated” framework; creator David Heinemeier Hansson likes to characterize Rails as “omakase,” his culturally appropriative way of saying his technical decisions are better than anyone else’s.

Rails got its foothold by being the little outsider that stood against enterprise Java’s vast monoliths and excesses: programming excesses, workflow excesses, and certainly its excesses of corporate politesse. In two representative 2009 pieces, DHH described himself as a “R-Rated Individual,” who believed innovation required “a few steps over [the line].” The line, in this case, was softcore pornography presented in a talk of Matt Aimonetti’s; Aimonetti did not adequately warn for sexual content, and was largely supported in his mistakes by the broader community. Many women in Ruby continue to view the talk’s fallout as a jarring, symbolic wound. […]

Technical affiliations, as Yang and Rabkin point out, are often determined by cultural signaling as much or more than technical evaluation. When Rails programmers fled enterprise Java, they weren’t only fleeing AbstractBeanFactoryCommandGenerators, the Kingdom of Nouns. They were also fleeing HR departments, “political correctness,” structure, process, heterogeneity. The growing veneer of uncool. Certainly Rails’ early marketing was more anti-enterprise, and against how Java was often used, than it was anti-Java — while Java is more verbose, the two languages are simply not that different. Rails couldn’t sell itself as not object-oriented; it was written in an OO language. Instead, it sold itself as better able to leverage OO. While that achievement sounds technical on the surface, Rails’ focus on development speed and its attacks on enterprise architects’ toys were fundamentally attacks on the social structures of enterprise software development. It was selling an escape from an undesirable culture; its technical advantages existed to enable that escape.

It’s easy to read these quotes and look at it as an isolated example, but it’s not. It’s evidence of larger systems and structures at play, and Rails is but one example of that.

More and more I realize our technology doesn’t exist in a vacuum. Who creates it and who benefits from it usually depends on large social systems, and we need to be mindful of those systems and work to stop them from marginalizing groups.

I don’t have a problem with trying to reinvent the wheel for its own sake, just to see if I can make a better wheel. But that’s not what we were doing. We were reinventing the wheel when the goal was to build a car, and the existing wheel was just too round or not round enough, and while we’re at it, let’s rethink that whole windshield idea, and I don’t know what a carburetor is so we probably don’t need it, and wow this is taking a long time so maybe we should hire a hundred more people? You don’t even get the satisfaction of tinkering with the wheel, because the car is so far behind schedule that your wheel will be considered finished as soon as it rolls well enough.

Yesterday Brent Simmons wrote about how unsustainable it is to be an indie iOS developer:

Yes, there are strategies for making a living, and nobody’s entitled to anything. But it’s also true that the economics of a thing may be generally favorable or generally unfavorable — and the iOS App Store is, to understate the case, generally unfavorable. Indies don’t have a fighting chance.

And that we might be better off doing indie iOS as a labour of love instead of a sustainable business:

Write the apps you want to write in your free time and out of love for the platform and for those specific apps. Take risks. Make those apps interesting and different. Don’t play it safe. If you’re not expecting money, you have nothing to lose.

He suggests one reason for the unsustainability in the App Store is the fact that it’s crowded, that it’s too easy to make apps:

You might think that development needs to get easier in order to make writing apps economically viable. The problem is that we’ve seen what happens when development gets easier — we get a million apps on the iOS App Store. The easier development gets, the more apps we see.

However, when expressing frustration with the current economics of the App Store, we need to consider the effect of this mass supply of enthusiastic, creative developers. As it gets ever easier to write apps, and we’re more able to express our creativity by building apps, the market suffers more from the economic problems of other creative fields.

I think this argument misses the problem and misses it in a big way. It has little to do with how many people are making apps (in business, this is known as “competition” and it’s an OK thing!). The problem is that people aren’t paying for apps because people don’t value apps, generally speaking (I’ve written about this before, too).

You might ask, “why don’t they value apps?” but I think if you turn the question around to “why aren’t apps valuable to people?” it becomes a little easier to see the problem. Is it really so unbelievable the app you’re trying to sell for 99 cents doesn’t provide that much value to your customers? Your customers don’t care about making things up in volume, they don’t care about the reach of the app store, they only care about the value of your software.

(Brief interlude to say that of course, “value” is not the only (or even necessarily, the most important) factor when it comes to a capitalist market. We’re taught that customers decide solely based on value, but they are obviously continuously manipulated by many factors, including advertising, social pressures and structures, etc. But I’m going to give you the benefit of the doubt that you actually care about creating value for people and run with that assumption).

I want to be clear I’m not suggesting price determines value (it may influence perceived value, but it doesn’t determine it entirely), but I’m saying if you’re pricing your app at 99 cents, you’re probably doing so because your app doesn’t provide very much value. Taking a 99 cent app and pricing it at $20 probably isn’t going to significantly increase its value in the eyes of your customers.

What I am saying is if you want a sustainable business, you’ve got to provide value for people. Your app needs to be worth people paying enough money that you can keep your lights on. Omni can get away with charging $50 for their iPhone apps because the people who use them think they’re worth that price. Something like OmniFocus isn’t comparable to a 99 cent app — it’s much more sophisticated and simply does more than most apps.

Value doesn’t have to come from having loads of features, but they might help get you there. Most people probably wouldn’t say “Github is worth paying for because it has a ton of features” but they might say “Github is worth paying for because I couldn’t imagine writing software without it.”

There’s something unsettling about the word “real” when used in phrases like “real job” or “real adult” or “real programming language,” and although I think we often use it without bad intent, I think it often ends up harming and belittling people on the receiving end.

Saying “real something” is implicitly saying other things aren’t real enough or aren’t in some way valid. We often associate this with a professional job.

It’s kind of demeaning to say a blogger isn’t a “real writer.” What’s often meant instead is the blogger is not a professional or paid writer, but that doesn’t mean what they write is any less real.

The good news is if you write words then you are a real writer!

As someone who has worked on programming languages for learning at both Hopscotch and Khan Academy, I’ve heard the term “real programming” more than I’d like to admit.

Never in my life have I heard so many paid, professional programmers demean a style of programming so frequently as they do programming languages for learning. I’ve even been told, vehemently so, by a cofounder of a learn-to-JavaScript startup “we don’t believe in teaching toy languages, we only want to teach people real programming.”

What is a real programming language anyway? I think if something is meant to be educational it’s often immediately dismissed by many programmers. They’ll often say it’s not a real language because you can’t write “real” programs in it, where “real” typically means “you type code” or “you can write an operating system in it.”

The good news is if your program is Turing complete then it is real programming!

Our history has shown time and time again the things we don’t consider “real” usually become legitimized in good time. Why do we exclude people and keep our minds so narrow to the things we love?

In a randomized trial, published in The New England Journal of Medicine, first-year students at three Canadian campuses attended sessions on assessing risk, learning self-defense and defining personal sexual boundaries. The students were surveyed a year after they completed the intervention.

The risk of rape for 451 women randomly assigned to the program was about 5 percent, compared with nearly 10 percent among 442 women in a control group who were given brochures and a brief information session. […]

Other researchers praised the trial as one of the largest and most promising efforts in a field pocked by equivocal or dismal results. But some took issue with the philosophy underlying the program’s focus: training women who could potentially be victims, rather than dealing with the behavior and attitudes of men who could potentially be perpetrators.

We’re living in something of a golden age of awareness-raising. Cigarette labels relay dire facts about the substances contained within. Billboards and PSAs and YouTube videos highlight the dangers of fat and bullying and texting while driving. Hashtag activism, the newest awareness-raising technique, abounds: After the La Isla Vista shootings, many women used the #YesAllWomen hashtag to relate their experiences with misogyny; and a couple months before that, #CancelColbert brought viral attention to some people’s anger with Stephen Colbert over what they saw as a racist joke. Never before has raising awareness about various dangers and struggles been such a visible part of everyday life and conversation.

But the funny part about all of this awareness-raising is that it doesn’t accomplish all that much. The underlying assumption of so many attempts to influence people’s behavior — that they make bad choices because they lack the information to empower them to do otherwise — is, except in a few cases, false. And what’s worse, awareness-raising done in the wrong way can actually backfire, encouraging the negative activities in question. One of the favorite pastimes of a certain brand of concerned progressive, then, may be much more effective at making them feel good about themselves than actually improving the world in any substantive way.

In my native land of Canada, the term “Engineer” is a protected term, like “Judge” or “Doctor” – I can’t just go out and claim to be a software engineer. I am a (lowly) software developer.

What separates engineering from developing?

In my opinion, discipline. […]

Software developers make code. Software engineers make products.

I would actually contend the opposite. I would say software engineers make the code and software developers make the product. I’d also like to point out I’m not trying to make a value judgement on either part of the job, just the distinction.

Developers are those developing the product. To me that includes things like design, resources, planning, etc. A software engineer (in the non-protected word sense) is somebody specializing specifically in the act of building the code. This includes things like designing systems and “architecture,” but their primary focus is eventually implementing those systems in code. In this view, I see a software engineer as belonging to the set of software developers.

This is why at WWDC we see talks about design, prototyping, audio, and accessibility. These are roles in the realm of software development, right alongside engineering.

Will developers waste months of their time developing tiny widgets for every imaginable kind of app? Are they making watch versions of iPhone apps that really should have just been web pages in the first place? Will the widgets show almost no information thanks to the tiny screen size and the immutable laws of physics?

Will the SDK be buggy during the betas? Will compile times be slow due to Swift? Will the betas goad developers into filing thousands of Radars that Apple developers will never fix because the Apple Watch is a distraction for Apple’s developers in addition to seemingly every 3rd party developer as well?

Will I finally be able to connect and share moments with the ones I love all from the comfort of my own watch? Will more notifications buzzing on my arm finally make me feel important like I’ve always dreamed of? Will it at least get me more followers on Twitter? Jesus where is my Uber?

Will the Native Apple Watch SDK improve in any significant way computing for a large number of people? Will a luxury timekeeping computing device bring us together or drive us apart? Will a native SDK improve or harm that?

Will it help us understand complex problems? Will it help us devise solutions to these problems?

Will the SDK help me realize the destructive tendencies of a capitalist lifestyle? Will the SDK make developers want to buy a new Apple Watch every year because all these native apps slow their watches down and because well they have two arms anyway so what’s the harm in buying another? Will I think about the people living elsewhere in the world who manufactured the watch? Will I think about where “away” is when I throw the watch away? Will I think about how WWDC is celebrating me for changing the world despite my immense privilege enabling me to become a professional software developer and live in a celebrated bubble because me and people like me are like, real good at helping Apple sell more watches and iPhones?

This week during Google I/O, we were given glimpses of some of the company’s ATAP projects. The two projects, both accompanied by short videos, focus on new methods of physical interaction.

Jacquard (video) is “a new system for weaving technology into fabric, transforming everyday objects, like clothes, into interactive surfaces.” This allows clothing to effectively become a multitouch surface, presumably to control nearby computers like smartphones or televisions.

Soli (video) is “a new interaction sensor using radar technology. The sensor can track sub-millimeter motions at high speed and accuracy. It fits onto a chip, can be produced at scale and built into small devices and everyday objects.” The chip recognizes small gestures made with your fingers or hands.

Let’s assume the technology shown in each demo works really well, which is certainly possible given Google’s track record for attracting incredible technical talent. It seems very clear to me Google has no idea what to do with these technologies, or if they do, they’re not saying. The Soli demo has people tapping interface buttons in the air and the Jacquard demo has people multitouching their clothes to scroll or make a phone call. Jacquard project founder Ivan Poupyrev even says it “is a blank canvas and we’re really excited to see what designers and developers will do with it.”

This is impressive technology and an important hardware step towards the future of interaction, but we’re getting absolutely no guidance on what this new kind of interaction should actually be, or why we’d use it. And the best we’re shown is a poor imitation of old computer interfaces. We’re implicitly being told existing computer interfaces are definitively the way we should manipulate the digital medium. We’re making an assumption and acting as if it were true without actually questioning it.

Emulating a button press or a slider scroll is not only disappointing but it’s also a step backwards. When we lose the direct connection with the device graphics being manipulated, the interaction becomes a weird remote control with no remote to tell us we’ve even made a click. This technology is useless if all we do with it is poorly emulate our existing steampunk interfaces of buttons and knobs and levers and sliders.

If you want inspiration for truly better human computer interfaces, I highly suggest checking out non-digital artists and craftspeople and their tools. Look at how a painter or an illustrator works. What does their environment look like? What tools do they have and how do they use them? How do they move their tools and what is the outcome of that? How much freedom do their tools afford them?

Look to musicians to see an expressive harmony between player and instrument. Look at the range of sound, volume, tempo a single person and single instrument can make. Look at how the hands and fingers are used, how the mouth and lungs are used, how the eyes are used. Look at how the instruments are positioned relative to the player’s body and relative to other players.

Look at how a dancer moves their body. Look at how every bone and muscle and joint is a possible degree of freedom. Look at how precise the movement can be controlled, how many possible poses and formations within a space. Look at how dancers interplay with each other, with the space, with the music, and with the audience.

And then look at the future being sold to you. Look at your hand outstretched in front of a smartphone screen lying on a table. Look at your finger and thumb clicking a pretend button to dismiss a dialog box. Look at your finger gliding over your sleeve to fast-forward a movie you’re watching on Netflix.

Is this the future you want? Do you want to twiddle your thumbs or do you want to dance with somebody?

According to data from comScore, for example, the overall time spent online with desktop devices in the U.S. has remained relatively stable for the past two years. Time spent with mobile devices has grown rapidly in that time, but the numbers suggest mobile use is adding to desktop use, not subtracting from it.

Another thing I wanted to mention about thinking about the future: it’s complicated. When it comes to the future, we rarely really have “the future”; instead what we have are a bunch of different parts (people, places, things, circumstances, and so on) all relating and working together in a complex way. From this lens, “the future” really represents the state of a big ol’ system of everything we know. Lovely, isn’t it?

So if you want to “invent the future” you have to understand your actions, inventions, etc. do not exist in a vacuum (unless you work at Dyson), but instead will have an interplay with everything.

And this constrains a lot of what’s possible without really great effort. If I wanted to make my own smartphone, with the intention of making the number one smartphone in the world, I couldn’t really do that. Today, I can’t make a better smartphone than Apple. There are really only a few major players who can play this game with a possibility of being the best.

I’m not trying to be defeatist here, but I am trying to point out you can’t just invent the future you want in all cases. The trick is to be aware of the kind of future you want. You can’t win at the smartphone game today. Apple itself couldn’t even be founded today. If you’re trying to compete with things like these then you have to realize you’re not really inventing the future but you’re inventing the present instead.

Be mindful of the systems that exist, or else you’ll be inventing tomorrow’s yesterday.

I’ve heard the possibly apocryphal advice that a person’s twenties are very important years in their life because those are the years when you effectively get “set in your ways,” when you develop the habits you’ll be living with for the rest of your life. As I’m a person who’s about to turn 27 whole years old, this has been on my mind lately. I’ve seen lots of people in their older years who are clearly set in their ways or their beliefs, who have no real inclination to change. What works for them will always continue to work for them.

If this is an inevitable part of aging, then I want to set myself in the best ways, with the best habits to carry me forward. Most of these habits will be personal, like taking good care of my health, keeping my body in good shape, and taking good care of my mind. But I think the most important habit for me is to always keep my mind learning, always changing and open to new ideas. That’s the most important habit I think I can develop over the next few years.

Keeping with this theme, I want to keep my mind open with technology as well. Already I’m feeling it’s too easy to say “No, I’m not interested in that topic because how I do things works for me already,” mostly when it comes to technical things (I am loath to use Auto Layout or write software for the Apple Watch). I don’t like to update my operating systems until I can no longer use any apps, because what I use is often less buggy than the newest release.

These habits I’m less concerned about, because newer OS releases and newer APIs in many ways seem like a step backwards to me (although I realize this might just be a sign that I’m already set in my ways!). I’m more concerned about how I perceive software in the abstract.

To me, “software” has always been something that runs as an app or a website, on a computer with a keyboard and mouse. As a longtime user of and developer for smartphones, I know software runs quite well on these devices as well, but it always feels subpar to me. In my mind, it’s hard to do “serious” kinds of work. I know iPhones and iPads can be used for creation and “serious” work, but I also know doing the same tasks typically done on a desktop are much more arduous on a touch screen.

Logically, I know this is a dead end. I know many people are growing up with smartphones as their only computer. I know desktops will seem ancient to them. I know in many countries, desktop computers are almost non-existent. I know there are people writing 3000 word school essays on their phones, and I know these sorts of things will only increase over time. But it defies my common sense.

As long as I hold on to my beliefs that software exists as an app or website on a device with a keyboard and mouse, I’m doomed to living in a world left behind.

I’ve seen it happen to people I respect, too. I love the concept of Smalltalk (and I’ll make smalltalk about Smalltalk to anyone who’ll listen) but I can’t help but feel it’s a technological ideal for a world that no longer exists. In some ways, it feels like we’ve missed the boat on using a computer as a powerful means of expression; instead, what we got is a convenient means of entertainment.

My point isn’t really about any particular trend. My point is to remind myself that what “software” is is probably always going to remain in flux, tightly related to things like social change or the ways of the markets. Software evolves and changes over time, but that evolution doesn’t necessarily line up with progress; it’s just different.

Alan Kay said the best way to predict the future is to invent it. But I think you need to understand that the future’s going to be different, first.

As you may know, when you send a message from the Messenger app there is an option to send your location with it. What I realized was that almost every other message in my chats had a location attached to it, so I decided to have some fun with this data. I wrote a Chrome extension for the Facebook Messenger page (https://www.facebook.com/messages/) that scrapes all this location data and plots it on a map. You can get this extension here and play around with it on your message data. […]

This means that if a few people who I am chatting with separately collude and send each other the locations I share with them, they would be able to track me very accurately without me ever knowing.

Even if you know how invasive Facebook is, this still seems shocking. Why?

These sorts of things always seem shocking because we don’t usually see them in the aggregate. Sending your location one message at a time seems OK, but it’s not until you see all the data together that it becomes scary.

We’re used to seeing data moment to moment, as if looking through a pinhole. We’re oblivious to the larger systems at work, and it’s not until we step up the ladder of abstraction that we start to see a bigger picture.

Say what you will of the privacy issues inherent in this discovery, but I think the bigger problem is our collective inability to understand and defend ourselves from these sorts of systems.

Smalltalk Reading List Poster [pdf]. I recently made this comic / poster of a brief history of Smalltalk, its influences, and the influence it has had on programming. It’s quite large and should be suitable for printing.

What is less apparent, perhaps, is that the will to abandon the public way is not some failure of understanding, or some nearsighted omission by shortsighted politicians. It is part of a coherent ideological project. As I wrote a few years ago, in a piece on the literature of American declinism, “The reason we don’t have beautiful new airports and efficient bullet trains is not that we have inadvertently stumbled upon stumbling blocks; it’s that there are considerable numbers of Americans for whom these things are simply symbols of a feared central government, and who would, when they travel, rather sweat in squalor than surrender the money to build a better terminal.” The ideological rigor of this idea, as absolute in its way as the ancient Soviet conviction that any entering wedge of free enterprise would lead to the destruction of the Soviet state, is as instructive as it is astonishing. And it is part of the folly of American “centrism” not to recognize that the failure to run trains where we need them is made from conviction, not from ignorance.

Yeah, what good has the American government ever done for America?

The Web and HTTP. John Gruber repeating his argument that because mobile apps usually use HTTP, that’s still the web:

I’ve been making this point for years, but it remains highly controversial. HTML/CSS/JavaScript rendered in a web browser — that part of the web has peaked. Running servers and client apps that speak HTTP(S) — that part of the web continues to grow and thrive.

But I call bullshit. HTTP is not what gives the web its webiness. Sure, it’s a part of the web stack, but so is TCP/IP. The web could have been implemented over any number of protocols and it wouldn’t have made a big difference.

What makes the web the web is the open connections between documents or “apps,” the fact that anybody can participate on a mostly-agreed-upon playing field. Things like Facebook Instant Articles or even Apple’s App Store are closed up, do not allow participation by every person or every idea, and don’t really act like a “web” at all. And they could have easily been built on FTP or somesuch and it wouldn’t make a lick of difference.

It may well be the “browser web” John talks about has peaked, but I think it’s incorrect to say the web is still growing because apps are using HTTP.

One of the ways this theory was first established is through rat experiments – ones that were injected into the American psyche in the 1980s, in a famous advert by the Partnership for a Drug-Free America. You may remember it. The experiment is simple. Put a rat in a cage, alone, with two water bottles. One is just water. The other is water laced with heroin or cocaine. Almost every time you run this experiment, the rat will become obsessed with the drugged water, and keep coming back for more and more, until it kills itself.

The advert explains: “Only one drug is so addictive, nine out of ten laboratory rats will use it. And use it. And use it. Until dead. It’s called cocaine. And it can do the same thing to you.”

But in the 1970s, a professor of Psychology in Vancouver called Bruce Alexander noticed something odd about this experiment. The rat is put in the cage all alone. It has nothing to do but take the drugs. What would happen, he wondered, if we tried this differently? So Professor Alexander built Rat Park. It is a lush cage where the rats would have colored balls and the best rat-food and tunnels to scamper down and plenty of friends: everything a rat about town could want. What, Alexander wanted to know, will happen then?

Consider Brooklyn, our best guess for where you might be reading this article. (Feel free to change to another place by selecting a new county on the map or using the search boxes throughout this page.)

The page guesses where you live (at least within America, I don’t know about the rest of the world) and updates its content dynamically to reflect that. The article is what’s best for the reader.

BuzzFeed is a successful company. And it is not only that: BuzzFeed is the rare example of a news organization that changes the way the news industry works. While it may not turn the largest profits or get the biggest scoops, it is shaping how other organizations sell ads, hire employees, and approach their work. BuzzFeed is the most influential news organization in America today because the Internet is the most influential medium—and, in some crucial ways, BuzzFeed demonstrates an understanding of that medium better than anyone else. […]

Time’s success sprang from a content innovation matched with a keen bet on demography. Its target audience was the average Fitzgerald protagonist, or, at least, his classmate. “No publication has adapted itself to the time which busy men are able to spend on simply keeping informed,” wrote the magazine’s two founders in a manifesto. It was for this audience, too, that the magazine mixed its reports on global affairs with briefs on culture, fashion, business, and politics. The overall feel of a Time issue was a feeling of omniscience: “Now, you, young man of industry, know it all.”

All the toxicity between the gender divide? It starts here. It starts when they’re kids. It begins when you say, “LOOK, THERE’S THE GIRL STUFF FOR THE GIRLS OVER THERE, AND THE BOY STUFF FOR THE BOYS OVER HERE.” And then you hand them their pink hairbrushes and blue guns and you tell your sons, “You can’t play with the pink hairbrush because GIRL GERMS yucky ew you’re not weird are you, those germs might make you a girl,” and then when the boy wants to play with the hairbrush anyway, he does and gets his ass kicked on the bus and gets called names like sissy or pussy or some homophobic epithet because parents told their kids that girl stuff is for girls only, which basically makes the boy a girl. And the parents got that lesson from the companies that made the hairbrush because nowhere on the packaging would it ever show a boy brushing hair or a girl brushing a boy’s hair. And on the packaging of that blue gun is boys, boys, boys, grr, men, war, no way would girls touch this stuff. Duh! Girls aren’t boys! No guns for you. […]

Now, this runs the risk of sounding like the plaintive wails of a MAN SPURNED, wherein I weep into the open air, “WHAT ABOUT ME, WHAT ABOUT US POOR MENS,” and that’s not my point, I swear. I don’t want DC or the toy companies to cater to my boy. I just don’t want him excluded from learning about and dealing with girls. I want society to expect him to actually learn about girls and be allowed to like them — not as romantic targets later in life, but as like, awesome ass-kicking complicated equals. As real people who are among him rather than separate from him.

This was delivered to me in the standard message format, no different than a New York Times alert informing me a building two blocks from my apartment has exploded, or an iChat message that my sister is desperately trying to reach me. Please note that I am not a blood relative of B.J. — sorry, Melvin — Upton, nor am I even a fan of the Atlanta Braves. In other words…this could have waited. Nonetheless, MLB.com At Bat apparently deemed this important enough to broadcast to hundreds of thousands of users who had earlier clicked, with hardly a second thought, on a dialogue box asking if they wanted to receive notifications from Major League Baseball. No matter what these users were doing — enduring a meeting, playing basketball, presenting to a book club, daydreaming, watching a movie, enjoying a family meal, painting their masterpiece, proposing marriage, interviewing a job candidate, having sex, or any combination thereof — the news of The Melvin Renaming (the next Robert Ludlum novel?) penetrated their individual radars, urging them to Look at me! Now! Even if they kept the phone stashed, the simple fact that there was an alert burrowed in their brains, keeping them just a little off balance until they finally picked up the phone to discover what the buzz was about.

The Melvin Renaming was just one interruption among billions in what now is unquestionably the Age of Notifications. As our reliance on electronically delivered information has increased, the cascade of brief urgent pointers to that information has been funneled into our devices, lighting our lock screens with these brief dispatches. Rarely does an app neglect to ask you to opt-in to these messages. Most often — since you see the dialogue box when you are entering your honeymoon stage with the app, just after consummation — you say yes. […]

So what’s the solution? We need a great artificial intelligence effort to comb through our information, assess the urgency and relevance, and use a deep knowledge of who we are and what we think is important to deliver the right notifications at the right time. As time goes on, we will trust such a system to effectively filter all our information and dole it out just as needed.

Gruber adds:

I think he’s on to something here: some sort of AI for filtering notification does seem useful. I can imagine helping it by being able to give (a) a thumbs-down to a notification that went through to your watch that you didn’t want to see there; and (b) a thumbs-up to a notification on your phone or PC that wasn’t filtered through to your more personal devices but which you wish had been.

But: this sounds too much like spam filtering to me. True spam is unasked-for. Notifications are all things for which you explicitly opted in, and can opt out of at any moment.

First of all, I think it sounds effectively like spam filtering because these notifications are effectively like spam. Although we technically opt in to them, we’re often coerced into doing so. As Levy said in the quoted passage, we’re often asked at a time when we’re feeling good about the app (after first downloading it, or after accomplishing a task; yes, developers opportunistically pop these up to get more people to agree to them). App developers know when is best to get you to agree, and they know notifications are an effective communication channel for “engaging” (i.e., advertising to) you.

These notifications are kind of like junk food. They’re delicious but dangerous. A little bit is fine, but too much is bad for you. While you can say junk food junkies are “opting in” to eating the unhealthy food, are they really making a choice? Or is the food literally irresistible to them?

Secondly, if this recent interview in Wired is to be believed, a deluge of notifications is one of the primary motivations for the development of the Apple Watch. Am I expected to pay $350+ in order to cut the annoyances of my $600+ iPhone? Wouldn’t it just be simpler to turn off the notifications (i.e., all of them) instead of throwing more technology at the problem?

We shouldn’t have to force (or shame) people into some false sense of virtue (“she’s so extreme, she doesn’t allow any notifications!”) just so they’re not constantly disturbed by buzzes and animating notifications.

It seems very likely Apple’s Force Touch technology (with its sister Taptic feedback engine) will come to a future iPhone, possibly whichever iPhone launches in the Fall of 2015. Like the recently launched MacBooks, the new iPhone will probably include APIs for your apps to take advantage of.

I’m imploring you to start thinking right now, today, about how you’re going to use these APIs in your applications.

So it goes

When Apple adds a new system-wide API to iOS, here’s how it usually goes: everybody adds some minor feature to their app thoughtlessly using the new API and the API becomes overused or misused.

Let’s look at Notifications. There are so many apps using notifications that shouldn’t be. Apps notify you about likes and comments. Apps notify you about downloads starting and downloads ending. Apps beg you to come back and use them more. Notifications, which were intended to notify you about important things, have instead become a way for apps to shamelessly advertise themselves at their own whim.

Let’s look at a less nefarious feature: Sharing. Apple introduced the “Sharing” features in iOS: a common interface for sharing app content to social networks. This feature is used everywhere. Your browser has it, your social apps have it, your games have it, your programming environments have it.

Another example: let’s look at AirDrop, a feature designed to share data between devices. This feature is used in all kinds of apps it shouldn’t be, like the New York Times app. How many apps have Today extensions? How many badge their icons? How many ask for your location or show a map?

The point of the above examples isn’t to argue the moral validity of their API use, but instead that these APIs are introduced by Apple, then app developers scramble to find ways to use these features in their apps, whether or not it really makes sense to do so. App developers may occasionally do so because it’s an important feature for their application, but often it seems developers use the APIs because Apple is more likely to promote apps using them or because the developers just think it’s neato.

This is something I’d like to avoid with the Force Touch APIs.

Force Touch

If we look to Apple for examples on how to use Force Touch in our applications, their usage has been pretty tame and uninspired so far. Most uses on their Force Touch page for the MacBook use Force Touch as a way of bringing up a contextual menu or view. For the “Force Click” feature, Apple describes features like:

looking up the definition of a word, previewing a file in the Finder, or creating a new Calendar event when you Force click a date in the text of an email.

You can do better in your apps. One way to think about force click is to think of it as an analogy for hovering on desktop computers (if I had my druthers, we’d use today’s “touch” as a hover gesture and we’d use force click as the “tap” or action gesture). Force click and hover are a little different, of course, and it’s your job to pay attention to these differences. Force click is less about skimming and more about confirming (again, my druthers and touch states!). How can your applications more powerfully let people explore and see information?

I wouldn’t look at hover functionality and just literally translate it using force click, but I would look at the kinds of interactions both can afford you. Hover can show tooltips, sure, but it can also be an ambient way to graze information. Look at how one skims an album in iPhoto (RIP) to see its photos at a glance. Look at how hovering over any data point in this visualization highlights related data (the data itself isn’t important, it’s to illustrate a usage of hover).

Pressure sensitivity as an input mechanism is a little more straightforward. You’ll presumably get continuous input in the range of 0 to 1 telling you how hard a finger is pressed and you react accordingly. Apple gives the example of varying pen thickness, but what else can you do? I’d recommend looking to video games for inspiration as they’ve been using this form of analog input for decades. Any game using a joystick or pressable shoulder triggers is a good place to start. Think about continuous things (pan gestures, sure, but also how your whole body moves, how you breathe, how you live) and things with a range (temperature, size, scale, sentiment, and, well, pressure). How can you use these in tandem with the aforementioned “hovering” scenarios?

If you want to get a head start on prototyping interactions, you can cheat by either programming on one of the new MacBooks, or you can use a new iOS 8 API on UITouch called majorRadius. This gives you an approximation of how “big” a touch was, which you can use as a rough estimate of “how hard” a finger was pressed (this probably isn’t reliable enough to ship an app with, but you can likely get a somewhat rough sense of how your interactions could work in a true pressure-sensitive environment).
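To make the prototyping idea concrete, here’s a minimal sketch of the kind of mapping involved: normalizing a majorRadius-style contact-size reading into a rough 0-to-1 “pressure” estimate, then interpolating that into a pen stroke width (Apple’s varying-pen-thickness example). The radius bounds and width range here are made-up calibration numbers you’d tune by hand; they’re assumptions for illustration, not values from any API.

```python
# Hypothetical sketch: estimate "pressure" from a touch's contact radius
# (the way iOS 8's UITouch majorRadius reports touch size), then map that
# pressure to a pen stroke thickness. All calibration numbers are made up.

def estimated_pressure(major_radius, min_radius=5.0, max_radius=40.0):
    """Normalize a touch radius (in points) into a rough 0..1 pressure value."""
    normalized = (major_radius - min_radius) / (max_radius - min_radius)
    return max(0.0, min(1.0, normalized))  # clamp to [0, 1]

def stroke_width(pressure, min_width=1.0, max_width=12.0):
    """Linearly interpolate normalized pressure into a pen stroke width."""
    return min_width + pressure * (max_width - min_width)

# A light press (small contact area) gives a thin stroke; a firm press, a thick one.
light = stroke_width(estimated_pressure(5.0))
firm = stroke_width(estimated_pressure(40.0))
print(light, firm)
```

The same shape of function works for any of the continuous interactions mentioned above; only the output mapping (thickness, scale, tempo, temperature) changes.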

Not every app needs Force Touch, but that probably won’t stop people from abusing it in Twitter and photo sharing apps. If you really care about properly using these new forms of interaction, then start thinking about how to do it right, today. There are decades’ worth of research and papers on this topic. Think about why hands are important. Read, think, design, and prototype. These devices are probably coming sooner than we think, so we should start thinking now about how to set a high bar for future interaction. Don’t relegate this feature to thoughtless context menus; use it as a way to add more discrete and explorable control to the information in your software.

The algorithms for facial recognition are getting better every day too. In another recent news story we heard how Neil Stammer, a juggler who had been on the run for 14 years, was finally caught. How did they catch him? An agent who was testing a piece of software for detecting passport fraud, decided to try his luck by using the facial recognition module of the software on the FBI’s collection of ‘Wanted’ posters. Neil’s picture matched the passport photo of somebody with a different name. That’s how they found Neil, who had been living as an English teacher in Nepal for many years. Apparently the algorithm has no problems matching a 14 year old picture with a picture taken today. Although it is great that they’ve managed to arrest somebody who is suspected of child abuse, it is worrying that it doesn’t seem like there are any safeguards making sure that a random American agent can’t use the database of pictures of suspects to test a piece of software.

Knowing this, it should come as no surprise that we have learned from the Snowden leaks that the National Security Agency (NSA) stores pictures at a massive scale and tries to find faces inside of them. Their ‘Wellspring’ program checks emails and other pieces of communication and shows them when it thinks there is a passport photo inside of them. One of the technologies the NSA uses for this feat is made by Pittsburgh Pattern Recognition (‘PittPatt’), now owned by Google. We underestimate how much a company like Google is already part of the military industrial complex. I therefore can’t resist showing a new piece of Google technology: the military robot ‘WildCat’ made by Boston Dynamics which was bought by Google in December 2013 […]

It is not only the government who is following us and trying to influence our behavior. In fact, it is the standard business model of the Internet. Our behaviour on the Internet is nearly always mediated by a third party. Facebook and WhatsApp sit between you and your best friend, Spotify sits between you and Beyoncé, Netflix sits between you and Breaking Bad and Amazon sits between you and however many Shades of Grey. The biggest commercial intermediary is Google who by now decides, among other things how I walk from the station to the theatre, in which way I will treat the symptoms of my cold, whether an email I’ve sent to somebody else should be marked as spam, where best I can book a hotel, and whether or not I have an appointment next week Thursday. […]

The casinos were the first industry to embrace the use of AEDs (automatic defibrillators). Before they started using them, the ambulance staff was usually too late whenever somebody had a heart attack: they are only allowed to use the back entrance (who will enter a casino when there is an ambulance in front of the door?) and casinos are purposefully designed so that you easily lose your way. Dow Schüll describes how she is with a salesperson for AEDs looking at a video of somebody getting a heart attack behind a slot machine. The man falls off his stool onto the person sitting next to him. That person leans to the side a little so that the man can continue his way to the ground and plays on. While the security staff is sounding the alarm and starts working the AED, there is literally nobody who is looking up from their gambling machine, everybody just continues playing.

This sort of reminds me of the feeling I often have when people around me are busy with Facebook on their phone. The feeling that it makes no difference what I do to get the person’s attention, that all of their attention is captured by Facebook. We shouldn’t be surprised by that. Facebook is very much like a virtual casino abusing the same cognitive weaknesses as the real casinos. The Facebook user is seen as an ‘asset’ of which the ‘time on service’ has to be made as long as possible, so that the ‘user productivity’ is as high as possible. Facebook is a machine that seduces you to keep clicking on the ‘like’ button.

It’s not just Facebook, either. I feel that way about almost all smartphone apps and social networks, too. Your attention is the currency.

This weekend, a man wearing a skull mask posted a video on YouTube outlining his plans to murder me. I know his real name. I documented it and sent it to law enforcement, praying something is finally done. I have received these death threats and 43 others in the last five months.

This experience is the basis of a Law & Order episode airing Wednesday called the “Intimidation Game.” I gave in and watched the preview today. The main character appears to be an amalgamation of me, Zoe Quinn, and Anita Sarkeesian, three of the primary targets of the hate group called GamerGate.

My name is Brianna Wu. I develop video games for your phone. I lead one of the largest professional game-development teams of women in the field. Sometimes I speak out on women in tech issues. I’m doing everything I can to save my life except be silent.

The week before last, I went to court to file a restraining order against a man who calls himself “The Commander.” He made a video holding up a knife, explaining how he’ll murder me “Assassin’s Creed Style.” He wrecked his car en route to my house to “deliver justice.” In logs that leaked, he claimed to have weapons and a compatriot to do a drive-by.

Awful, disturbing stuff.

I’ll also remind you that you don’t have to be making death and rape threats to be part of sexism in tech. The hatred of women has got to stop.

See also the “top stories” at the bottom of the essay. This representation of women as obsessed only with their looks is pretty toxic to men and women alike.

Yesterday I linked to J. Vincent Toups’ 2011 Duckspeak Vs Smalltalk, an essay about how far, or really how little, we’ve come since Alan Kay’s Dynabook concept, and a critique of the limitations inherent in today’s App Store style computing.

A frequent reaction to this line of thought is “we shouldn’t make everyone be a programmer just to use a computer.” In fact, after Loren Brichter shared the link on Twitter, there were many such reactions. While I absolutely agree abstractions are a good thing (e.g., you shouldn’t have to understand how electricity works in order to turn on a light), one of the problems with computers and App Stores today is that we don’t have the option of knowing how the software works even if we wanted to.

But the bigger problem is what our conception of programming is today. When the Alto computer was being researched at Xerox, nobody expected people to program like we do today. JavaScript, Objective-C, and Swift (along with all the other “modern” languages) are pitiful languages for thinking; they were designed instead for managing computer resources (JavaScript, for example, was thoughtlessly cobbled together in just ten days). The reaction of “people shouldn’t have to program to use a computer” hinges on what it means to program, and what software developers think of as programming is vastly different from what the researchers at Xerox had in mind.

Programming, according to Alan Kay and the gang, was a way for people to be empowered by computers. Alan correctly recognized the computer as a dynamic medium (the “dyna” in “Dynabook”) and deemed it crucial that people be literate with this medium. Literacy, you’ll recall, means being able to read and write in a medium, to be able to think critically and reason with a literature of great works (that’s the “book” in “Dynabook”). The App Store method of software essentially neuters the medium into a one-way consumption device. Yes, you can create on an iPad, but the system’s design language does not allow for the creation of dynamic media.

Nobody is expecting people to have to program a computer in order to use it, but the PARC philosophy has at its core a symmetric concept of creation as well as consumption. Not only are all the parts of Smalltalk accessible to any person, but all the parts are live, responsive, active objects. When you need to send a live, interactive model to your colleague or your student, you send the model: not an attachment, not a video or a picture, but the real live object. When you need to do an intricate task, you don’t use disparate “apps” and pray the developers have somehow enabled data sharing between them; you combine the parts yourself. That’s the inherent power in the PARC model that we’ve completely eschewed in modern operating systems.

Smalltalk and the Alto were far from perfect, and I’ll be the last to suggest we use them as is. But I will suggest we understand the philosophy and the desires to empower people with computers and use that understanding to build better systems. I’d highly recommend reading Alan’s Early History of Smalltalk and A Personal Computer for Children of All Ages to learn what the personal computer was really intended to be.

Recently, I’ve been examining the language I use in the context of determining if I may be inadvertently hurting anyone. For instance, using “insane” or “crazy” as synonyms for “unbelievable” probably doesn’t make people suffering from mental illness feel great. […]

Pretty straightforward. There are terms out there that are offensive to people who identify as members of groups that those terms describe. The terms are offensive primarily because they connote negativity beyond the meaning of the word. […]

To me, the bottom-line is that these words are hurtful and there are semantically identical synonyms to use in their place, so there is no reason to continue to use them. Using the terms is hurtful and continuing to use them when you know they’re hurtful is kind of a dick move.

Ash published this article in October 2014 and it’s been on my mind ever since. It makes total sense to me, and I’ve been trying hard to remove these words from my vocabulary. It takes time, especially considering how pervasive they are, but it’s important. If you substitute the word “magical” in for any of the bad words, it makes your sentences pretty delightful, and shows how banal the original words really are, as we over-use them anyway.

While the Dynabook was meant to be a device deeply rooted in the ethos of active education and human enhancement, the iDevices are essentially glorified entertainment and social interaction (and tracking) devices, and Apple-controlled revenue stream generators for developers. The entire “App Store” model, then, works to divide the world into developers and software users, whereas the Xerox PARC philosophy was for there to be a continuum between these two states. The Dynabook’s design was meant to recruit the user into the system as a fully active participant. The iDevice is meant to show you things, and to accept a limited kind of input - useful for 250-character Tweets and Facebook status updates, all without giving you the power to upset Content Creators, upon whom Apple depends for its business model. Smalltalk was created with the education of adolescents in mind - the iPad thinks of this group as a market segment. […]

It is interesting that at one point, Jobs (who could not be reached for comment [note: this was written before Jobs’ death]) described his vision of computers as “interpersonal computing,” and by that standard, his machines are a success. It is just a shame that in an effort to make interpersonal engagement over computers easy and ubiquitous, the goal of making the computer itself easily engaging has become obscured. In a world where centralized technology like Google can literally give you a good guess at any piece of human knowledge in milliseconds, it’s a real tragedy that the immense power of cheap, freely available computational systems remains locked behind opaque interfaces, obscure programming languages, and expensive licensing agreements.

The article is also great because it helps dispel the myth that Apple took “Xerox’s rough unfinished UI and polished it for the Mac.” It’s closer to the truth to say Apple dramatically stripped the Smalltalk interface of its functionality, resulting in a toy, albeit cheaper, personal computer.

Over the years, I’ve noticed that when I do have a specific reason to ask everyone to set aside their devices (‘Lids down’, in the parlance of my department), it’s as if someone has let fresh air into the room. The conversation brightens, and more recently, there is a sense of relief from many of the students. Multi-tasking is cognitively exhausting — when we do it by choice, being asked to stop can come as a welcome change.

So this year, I moved from recommending setting aside laptops and phones to requiring it, adding this to the class rules: “Stay focused. (No devices in class, unless the assignment requires it.)” Here’s why I finally switched from ‘allowed unless by request’ to ‘banned unless required’. […]

Worse, the designers of operating systems have every incentive to be arms dealers to the social media firms. Beeps and pings and pop-ups and icons, contemporary interfaces provide an extraordinary array of attention-getting devices, emphasis on “getting.” Humans are incapable of ignoring surprising new information in our visual field, an effect that is strongest when the visual cue is slightly above and beside the area we’re focusing on. (Does that sound like the upper-right corner of a screen near you?)

The form and content of a Facebook update may be almost irresistible, but when combined with a visual alert in your immediate peripheral vision, it is—really, actually, biologically—impossible to resist. Our visual and emotional systems are faster and more powerful than our intellect; we are given to automatic responses when either system receives stimulus, much less both. Asking a student to stay focused while she has alerts on is like asking a chess player to concentrate while rapping their knuckles with a ruler at unpredictable intervals. […]

Computers are not inherent sources of distraction — they can in fact be powerful engines of focus — but latter-day versions have been designed to be, because attention is the substance which makes the whole consumer internet go.

The fact that hardware and software is being professionally designed to distract was the first thing that made me willing to require rather than merely suggest that students not use devices in class. There are some counter-moves in the industry right now — software that takes over your screen to hide distractions, software that prevents you from logging into certain sites or using the internet at all, phones with Do Not Disturb options — but at the moment these are rear-guard actions. The industry has committed itself to an arms race for my students’ attention, and if it’s me against Facebook and Apple, I lose. […]

The “Nearby Peers” effect, though, shreds that rationale. There is no laissez-faire attitude to take when the degradation of focus is social. Allowing laptop use in class is like allowing boombox use in class — it lets each person choose whether to degrade the experience of those around them.

This weekend, I attended a small 20-person workshop on figuring out how to use interactivity to help teach concepts. I don’t mean glorified flash cards or clicking through a slideshow. I mean stuff like my 2D lighting tutorial, or this geology-simulation textbook, or this explorable explanation on explorable explanations.

Over the course of the weekend workshop, we collected a bunch of design patterns and considerations, which I’ve made crappy diagrams of, as seen below. Note: this was originally written for members of the workshop, so there are a lot of external references that you might not get if you weren’t there.

This is great because it’s full of examples about why and when and what to make things explorable.

In ten years, when America’s health care system is still a hideous, tragic mess, Republicans will believe that this is due to the faulty premises of Democratic legislation, while Democrats will believe that the legislation was fatally weakened by obstinate Republicans. While we can of course reason our way to our own hypotheses, we will lack a truly irrefutable conclusion, the sort we now have about, say, whether the sun revolves around the earth.

Thus: a real effect of compromise is that it prevents intact ideas from being tested and falsified. Instead, ideas are blended with their antitheses into policies that are “no one’s idea of what will work,” allowing the perpetual political regurgitation, reinterpretation, and relational stasis that defines the governance of the United States.

Baker goes on to detail how compromise results in similarly poor designs and argues in favour of the auteur. Compromise can often act like racial colourblindness, when instead each voice needs to shine on its own merits.

Worse still, in my experience, compromise is often a negative feedback loop: it’s difficult to convince an organization it should stop compromising, when it can only agree to things by compromising. It’s poison in the institutional well.

Right before Christmas, Bret Victor released his newest talk, “The Humane Representation of Thought”, summing up the current state of his long-reaching and far-seeing visionary research. I’ve got lots of happy-but-muddled thoughts about it, but suffice it to say, I loved it. If you like his work, you’ll love this talk.

There seem to be a few unifying components of greatness. The first is willingness to work hard. No one has become great by surfing the internet. Anyone who you would consider great has most likely achieved their status through sheer hard work, not necessarily their genius. In fact, their genius probably came after the fact, as a result of their work, rather than through any latent brilliance that was lurking beneath the surface, ready to be sprung upon the world. […]

Great people don’t just sit around thinking, either; they’re creating. Without creating something, there would be no sign of their greatness. They would be another cog in the mass machine, churning ideas. They’d be addicted to brain crack. But they’re not. Great people make things: books, songs, blogonet posts, videos, programs, stories, websites. Their greatness is in their creations. Their creations show signs of the hard work and unique genius of the person making them.

I’ll add something sort of parallel to what Glen is saying here: great people make their work better and clearer by doing a lot of it.

I’m not particularly after greatness (it’s fine if you are), but I’ve struggled a lot in recent years with ideas I “know” to be true in my mind, but that I can’t easily express clearly, either in writing or in speech. That’s a problem, because whatever potential greatness there might be, nobody else can understand it (that, and the idea is probably just actually less great because it isn’t clear, even to me, yet).

Great people who create a lot hone their skills of expressing and refining their ideas.

Textual programming languages are mostly devoid of structure — any structure that exists actually exists in our heads — and we have to manually organize our code around that (modern editors offer us syntax highlighting and auto-indentation, which help, but we still have to maintain the structure mentally).

New programmers often write “spaghetti code,” or don’t use variables, or don’t use functions. We teach them about these features, but they often don’t make the connection as to why they’re useful. How many times have you seen a beginner programmer say “Why would I use that?” after you explain a new feature to them? Actually, how many times have you seen the opposite? I’ll bet almost never.

I have a hunch this is sort of related to why development communities often shame their members so much. There are a lot of mean spirits around. There’s a lot of “Why the hell would you ever use X?? Don’t you know about Y? Or why X is so terrible? It’s bad and you should feel bad.” We have way too much of that, and my hunch is this is (partly) related to our programming environments being entirely structureless.

When the medium you work in doesn’t actually encourage powerful ways of getting the work done, the community fills in the gaps by shaming its members into “not doing it wrong.”

I could be wrong, and even if I’m right, I’m not saying this is excusable. But I do think we’re quite well known for being a shame culture, and I think we do that in order to keep our heads from exploding. We shame until we believe, and we believe until we understand. Perhaps our environments should help us understand better in the first place, and we can leave the shaming behind.

Whatever justifications or advantages came along later – and it’s true, you do save a few processor cycles here and there and that’s nice – the reason we started using zero-indexed arrays was because it shaved a couple of processor cycles off of a program’s compilation time. Not execution time; compile time.[…]

I can tell nobody has ever actually looked this up.
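For what it’s worth, the address-arithmetic half of the story is easy to state concretely: with zero-based indexing, element i of an array sits at base + i × size, with no correction term, while one-based indexing needs an extra subtraction. A quick sketch (the function names are mine, purely illustrative):

```python
# With zero-based indexing, element i of an array starting at
# address `base`, with fixed-size elements, lives at
# base + i * size -- no offset correction needed.

def element_address(base, index, size):
    """Address of element `index` in a zero-indexed array."""
    return base + index * size

def element_address_one_based(base, index, size):
    """One-based indexing pays an extra subtraction."""
    return base + (index - 1) * size

# Zero-indexed: the first element is simply at `base`.
assert element_address(1000, 0, 4) == 1000
assert element_address(1000, 3, 4) == 1012

# One-based: same answers, one more operation each time.
assert element_address_one_based(1000, 1, 4) == 1000
assert element_address_one_based(1000, 4, 4) == 1012
```

Whether that micro-saving was ever the real historical motive, rather than a justification bolted on afterwards, is exactly what the essay calls into question.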

Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but “mythology”. Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it’s an inevitable consequence of an immutable set of physical laws, we’re effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we’re disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody’s fault, and the more history you learn and the more you look at the sad state of modern computing, the more pathetic and irresponsible that sounds. […]

The second thing is how profoundly resistant to change or growth this field is, and apparently has always been. If you haven’t seen Bret Victor’s talk about The Future Of Programming as seen from 1975 you should, because it’s exactly on point. Over and over again as I’ve dredged through this stuff, I kept finding programming constructs, ideas and approaches we call part of “modern” programming if we attempt them at all, sitting abandoned in 45-year-old demo code for dead languages. And to be clear: that was always a choice. Over and over again tools meant to make it easier for humans to approach big problems are discarded in favor of tools that are easier to teach to computers, and that decision is described as an inevitability.

The tech business is proud of its workaholism, but it really shouldn’t be. It’s a sign of immaturity and poor management, not drive.

Yuuuuuuuuup.

The masculine mistake. What is so broken inside American men? Why do we make so many spaces unsafe for women? Why do we demand that they smile as we harass them - and why, when women bring the reality of their everyday experiences into the open, do we threaten to kill them for it?

If you’re a man reading this, you likely feel defensive by now. I’m not one of those guys, you might be telling yourself. Not all men are like that. But actually, what if they are? And what if men like you telling yourselves that you’re not part of the problem is itself part of the problem?

We’ve all seen the video by now. “Smile,” says the man, uncomfortably close. And then, more angrily, “Smile!”

An actress, Shoshana Roberts, spends a day walking through New York streets, surreptitiously recorded by a camera. Dozens of men accost her; they comment on her appearance and demand that she respond to their “compliments.” […]

Why do men do this? How can men walk down the same streets as women, attend the same schools, play the same games, live in the same homes, be part of the same families - yet either not realize or not care how hellish we make women’s lives?

One possible answer: Straight American masculinity is fundamentally broken. Our culture socializes young men to believe that they are entitled to sexual attention from women, and that women go about their lives with that as their primary purpose - as opposed to just being other people, with their own plans, priorities and desires.

We teach men to see women as objects, not other human beings. Their bodies are things men are entitled to: to judge, to assess, and to dispose of - in other words, to treat as pornographic playthings, to have access to and, if the women resist, to threaten, to destroy.

We raise young boys to believe that if they are not successful at receiving sexual attention from women, then they are failures as men. Bullying is merciless in our culture, and is heaped upon geeky boys by other young men in particular (and all the more so against boys who do not appear straight).

But because young men are taught to despise vulnerability, in themselves and in others, they instead turn that hatred upon those who are already more vulnerable - women and others - with added intensity. Put differently, and without in any way excusing their monstrous behavior, young men are given unrealistic expectations, taught to hate themselves when reality falls short - and then to blame women for the whole thing.

I’m reminded of this excellent and positive TED talk about a need to give boys better stories. We need more stories where “the guy doesn’t get the girl in the end, and he’s OK with that.” We need to teach boys this is a good outcome, that boys aren’t entitled to girls.

If you’re shopping for presents for boys this Christmas, I implore you to keep this in mind. Don’t buy them a story of a prince or a hero who “gets the girl.”

Modern software development isn’t all that modern. Its origins are rooted in the original Macintosh: an era and environment with no networking, slow processors, limited memory, and almost no collaboration between developers or the processes they wrote. Today, software is developed as though these constraints still remain.

We need a modern approach to building better software. Not just incremental improvements but fundamental leaps forward. This talk presents frameworks at the sociological, conceptual, and programmatic levels to rethink how software should be made, enabling a giant leap to better software.

I haven’t been able to bring myself to watch myself talk yet. Watch it and tell me how it went?

It finally hit me: the way I felt wasn’t “people on Twitter are jerks,” it was “people are jerks on Twitter.” After this epiphany, and a brief hiatus to see if I could even break my own habits, I’ve made my decision: I’m getting off of Twitter, effective immediately. A link to this blog post will be my final tweet, and I’m only going to watch for replies until Tuesday. As part of my hiatus, I’ve already deleted all my Twitter apps from all my devices, and I’ll be scrambling my password on Tuesday to even prevent myself from logging in without going through the “forgot my password” hoops.

It’s November 23, 2014. In Brooklyn, New York, it’s getting colder as we inch closer to Winter. The leaves are still falling, but the snow isn’t. Nor is there any snow on the ground. But if you listen closely, you can hear a disturbing sound.

It’s thirty-two days until Christmas, and the grocery stores are already playing Christmas music. The streets are already decorated, and Starbucks has all the ornaments and “Holiday Flavors” out in full swing. It’s a week before American Thanksgiving.

This is Christmasmania.

Let’s look at Christmasmania for a moment. We’ve started celebrating a holiday that comes once a year thirty-two days before it arrives. We’ll likely celebrate it for a week after the day, too. That’s almost forty days of Christmas, every year. Let’s look at this another way.

Conservatively, let’s say we spend one month per year in Christmasmania. One month per year is one twelfth of a year. Let’s pretend we live in a land of Christmasmania where instead of spending one month of the year, one twelfth of a year devoted to the “holiday spirit”, we instead spent two hours (2/24 hours = 1/12 of a day) of every single day of the year in Christmasmania.
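If you want to check the equivalence in that thought experiment, the arithmetic is a one-liner (a throwaway sketch, just to confirm the fractions):

```python
from fractions import Fraction

# One month per year devoted to Christmasmania...
month_share_of_year = Fraction(1, 12)

# ...is the same fraction of our time as two hours per day.
two_hours_share_of_day = Fraction(2, 24)

assert month_share_of_year == two_hours_share_of_day

# Over a (non-leap) year, that works out to 730 hours.
hours_per_year = month_share_of_year * 365 * 24
print(hours_per_year)  # 730
```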

Every single day, between the hours of 6 and 8 PM, families don their yuletide sweaters, pour each other cups of eggnog, and listen to a couple hours of Christmas carols. They’ll spend a few minutes shopping for that perfect gift, they’ll spend a few minutes wrapping it, and they’ll keep it under the tree for half an hour or so. The kids will watch YouTube clips of Rudolph and How the Grinch Stole Christmas. And maybe if they’re good, the kids will get to open a present before being sent off to bed, to have visions of sugarplums dance in their heads.

Two hours of Christmasmania. Every day.

Here’s the really insidious thing about Christmasmania. It’s not that the decorations go up during Halloween. It’s not that Starbucks has eggnog flavoured napkins before Remembrance and Veterans’ Day. It’s not that the same garbage Christmas songs are recycled and re-recorded by the pop-royalty-du-jour and pumped out of every shopping centre speaker before Americans even have a chance to be thankful. It’s not the over-commercialized nature of “finding the perfect gift for that special someone.”

No, what’s really insidious about Christmasmania is how self-perpetuating and reinforcing it is. For the Christmasmania virus to survive, it must take control of its host, but not kill its host.

Christmasmania, also known as “the Holiday Spirit,” requires its hosts to keep one another in line. Every single one of the numerous Christmas movies (of which Christmasmania dictates we watch at least a few) has at least one social outcast, the “grinch,” who simply does not like Christmas. We are taught to despise this grinch, to pity this grinch, and to rehabilitate the grinch so that he or she can see the “true meaning of Christmas” and get into the “Holiday Spirit.” “If you don’t like Christmas,” the mania tells us, “there’s something wrong with you, because nothing can be wrong with Christmas. Don’t you like giving? Don’t you like shopping?”

I think Christmas can be a wonderful celebration, a special time of closeness with your family and loved ones that you might otherwise not get through the rest of the year, and that’s a great thing. But when we as a whole are programmed and forced to buy in to the mania that surrounds it, the celebration becomes lost in a morass of stop-motion, candy-cane-flavoured Bing Crosby songs. So this Christmas, remember your loved ones. They’re the real present.

The next year I visited Seymour Papert, Wally Feurzeig, and Cynthia Solomon to see the LOGO classroom experience in the Lexington schools. This was a revelation! And was much more important to me than the metaphors of “tools” and “vehicles” that were central to the ARPA way of characterizing its vision. This was more like the “environment of powerful epistemology” of Montessori, the “environment of media” of McLuhan, and even more striking: it evoked the invention of the printing press and all that it brought. This was not just “augmenting human intellect”, but the “early shaping of human intellect”. This was a “cosmic service idea”. […]

At this first brush, the service model was: facilitate children “learning the world by constructing it” via an interactive graphical interface to an “object-oriented-simulation-oriented-LOGO-like language”.

A few years later at Xerox PARC I wrote “A Personal Computer For Children Of All Ages”. This was written mostly to start exploring in more depth the desirable services that should be offered. I.e. what should a Dynabook enable? And why should it enable it?

The first context was “everything that ARPA envisioned for adults but in a form that children could also learn and use”. The analogy here was to normal language learning in which children are not given a special “children’s language” but pick up speaking, reading and writing their native language directly through subsets of both the content and the language. In practice for the Dynabook, this required inventing better languages and user interfaces for adults that could also be used for children (this is because most of the paraphernalia for adults in those days was substandard for all). […]

Back then, it was in the context that “education” meant much more than just competing for jobs, or with the Soviet Union; how well “real education” could be accomplished was the very foundation of how well a democratic federal republic could carry out its original ideals.

[Thomas] Jefferson’s key idea was that a general population that has learned to think and has acquired enough knowledge will be able to dynamically steer the “ship of state” through the sometimes rough waters of the future and its controversies (and conversely, that the republic will fail if the general population is not sufficiently educated).

An important part of this vision was that the object of education was not to produce a single point of view, but to produce citizens who could carry out the processes of reconciling different points of view.

If most Americans today were asked “why education?”, it’s a safe bet that most would say “to help get a good job” or to “help make the US more competitive worldwide” (a favorite of our recent Presidents). Most would not mention the societal goal of growing children into adults who will be “enlightened enough to exercise their control with a wholesome discretion” or to understand that they are the “true corrective of abuses of … power”.

Research shows that girls who play with fashion dolls see fewer career options for themselves than boys (see study). One fashion doll is sold every three seconds. Girls’ feet are made for high-tops, not high heels… it’s time for change.

Scientists escape to a large extent from simple belief by having done enough real experimentation, model building using mathematics that suggests new experiments, etc., to realize that science is more like map-making for real navigators than bible-making: IOW, the maps need to be as accurate as possible, with annotations for errors and kinds of measurements, done by competent map-makers rather than story tellers, and they are always subject to improvement and rediscovery: they never completely represent the territory they are trying to map, etc.

Many of us who have been learning how to help children become scientists (that is, to be able to think and act as scientists some of the time) have gathered evidence which shows that helping children actually do real science at the earliest possible ages is the best known way to help them move from simple beliefs in dogma to the more skeptical, empirically derived models of science. […]

There is abundant evidence that helping children move from human built-in heuristics and the commonsense of their local culture to the “uncommonsense” and heuristic thinking of science, math, etc., is best done at the earliest possible ages. This presents many difficulties ranging from understanding how young children think to the very real problem that “the younger the children, the more adept need to be their mentors (and the opposite is more often the case)”.[…]

So, for young and youngish children (say from 4 to 12) we still have a whole world of design problems. For one thing, this is not a homogeneous group. Cognitively and kinesthetically it is at least two groups (and three groupings is an even better fit). So, we really think of three specially designed and constructed environments here, where each should have graceful ramps into the next one.

The current thresholds exclude many designs, but more than one kind of design could serve. If several designs could be found that serve, then we have a chance to see if the thresholds can be raised. This is why we encourage others to try their own comprehensive environments for children. Most of the historical progress in this area has come from a number of groups using each other’s ideas to make better attempts (this is a lot like the way any science is supposed to work). One of the difficulties today is that many of the attempts over the last 15 or so years have been done with too low a sense of threshold and thus start to clog and confuse the real issues.

I think one of the trickiest issues in this kind of design is an analogy to the learning of science itself, and that is “how much should the learners/users have to do by themselves vs. how much should the curriculum/system do for them?” Most computer users have been mostly exposed to “productivity tools” in which as many things as possible have been done for them. The kinds of educational environments we are talking about here are at their best when the learner does the important parts by themselves, and any black or translucent boxes serve only on the side and not at the center of the learning. What is the center and what is the side will shift as the learning progresses, and this has to be accommodated.

OTOH, the extreme build it from scratch approach is not the best way for most minds, especially young ones. The best way seems to be to pick the areas that need to be from scratch and do the best job possible to make all difficulties be important ones whose overcoming is the whole point of the educational process (this is in direct analogy to how sports and music are taught – the desire is to facilitate a real change for the better, and this can be honestly difficult for the learner).

Deceptive linking practices – from big flashing “download now” buttons hovering above actual download links, to disguising links to advertising by making them indistinguishable from content links – may not initially seem like violations of user consent. However, consent must be informed to be meaningful – and “consent” obtained by deception is not consent.

Consent-challenging approaches offer potential competitive benefits. Deceptive links capture clicks – so the linking site gets paid. Harvesting of emails through automatic opt-in aids in marketing and lead generation. While the actual corporate gain from not allowing unsubscribes is likely minimal – users who want to opt out are generally not good conversion targets – individuals and departments with quotas to meet will cheer the artificial boost to their mailing list size.

These perceived and actual competitive advantages have led to violations of consent being codified as best practices, rendering them nigh-invisible to most tech workers. It’s understandable – it seems almost hyperbolic to characterize “unwanted email” as a moral issue. Still, challenges to boundaries are challenges to boundaries. If we treat unwanted emails, or accidentally clicked advertising links, as too small a deal to bother, then we’re asserting that we know better than our users what their boundaries are. In other words, we’re placing ourselves in the arbiter-of-boundaries role which abuse culture assigns to “society as a whole.”[…]

The industry’s widespread individual challenges to user boundaries become a collective assertion of the right to challenge – that is, to perform actions which are known to transgress people’s internally set or externally stated boundaries. The competitive advantage, perceived or actual, of boundary violation turns from an “advantage” over the competition into a requirement for keeping up with them.

Individual choices to not fall behind in the arms race of user mistreatment collectively become the deliberate and disingenuous cover story of “but everyone’s doing it.”[…]

The hacker mythos has long been driven by a narrow notion of “meritocracy.” Hacker meritocracy, like all “meritocracies,” reinscribes systems of oppression by victim-blaming those who aren’t allowed to succeed within it, or gain the skills it values. Hacker meritocracy casts non-technical skills as irrelevant, and punishes those who lack technical skills. Having “technical merit” becomes a requirement to defend oneself online. […]

It’s easy to bash Zynga and other manufacturers of cow clickers and Bejeweled clones. However, the mainstream tech industry has baked similar compulsion-generating practices into its largest platforms. There’s very little psychological difference between the positive-reinforcement rat pellet of a Candy Crush win and that of new content in one’s Facebook stream.[…]

I call on my fellow users of technology to actively resist this pervasive boundary violation. Social platforms are not fully responsive to user protest, but they do respond, and the existence of actual or potential user outcry gives ethical tech workers a lever in internal fights about user abuse.

Facebook Rooms. Inspired by both the ethos of these early web communities and the capabilities of modern smartphones, today we’re announcing Rooms, the latest app from Facebook Creative Labs. Rooms lets you create places for the things you’re into, and invite others who are into them too.[…]

Not only are rooms dedicated to whatever you want, room creators can also control almost everything else about them. Rooms is designed to be a flexible, creative tool. You can change the text and emoji on your like button, add a cover photo and dominant colors, create custom “pinned” messages, customize member permissions, and even set whether or not people can link to your content on the web. In the future, we’ll continue to add more customizable features and ways to tweak your room. The Rooms team is committed to building tools that let you create your perfect place. Our job is to empower you.

My guess is Rooms is a more strategic move to try to attract teens who seek privacy in apps like Snapchat. Seems like a good place for a clique.

A lot of computing pioneers — the people who programmed the first digital computers — were women. And for decades, the number of women studying computer science was growing faster than the number of men. But in 1984, something changed. The percentage of women in computer science flattened, and then plunged, even as the share of women in other technical and professional fields kept rising.

The number one question I hear from iOS developers when Swift comes up is “Should you switch to Swift?” and my answer for that is “Probably yes.” It’s of course not a black and white answer and depends on your situation, but if you’re an experienced Objective C programmer, now is a great time to start working in Swift.

Should you learn Swift or Objective C?

For newcomers to the iOS development platform, I reckon the number one question asked is “Should I learn with Objective C or with Swift?” Contrary to what some may say (e.g., Big Nerd Ranch), I suggest you start learning with Swift first.

The main reason for this argument is cruft: Swift doesn’t have much and Objective C has a whole bunch. Swift’s syntax is much cleaner than Objective C’s, which means a beginner won’t get bogged down with unnecessary details that would otherwise trip them up (e.g., header files, pointer syntax, etc.).

I used to teach a beginners’ iOS development course, and while most learners could grasp the core concepts easily, they were often tripped up by the implementation details of Objective C oozing out the seams. “Do I need a semicolon here? Why do I need to copy and paste this method declaration? Why don’t ints need a pointer star? Why do strings need an @ sign?” The list goes on.

When you’re learning a new platform and a new language, you have enough of an uphill battle without having to deal with the problems of a 1980s programming language.

In place of Objective C’s header files, importing, and declaring of methods, Swift has just one file with a single declaration and implementation of methods, with no need to import files within the same module. There goes all that complexity right out the window. In place of Objective C’s pointer syntax, in Swift both reference and value types use the same syntax.
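To make that concrete, here’s a minimal sketch (hypothetical type names, written in current Swift syntax, which is slightly newer than the Swift 1.0 these posts describe). The Objective C version of this class would need a Greeter.h interface file, a Greeter.m implementation file, and #import lines tying them together; in Swift the whole thing is one file:

```swift
// The declaration and implementation live together in a single file.
class Greeter {
    let name: String          // a reference-free property: no pointer star needed

    init(name: String) {
        self.name = name
    }

    func greeting() -> String {   // declared and implemented in one place
        return "Hello, \(name)"   // string literals need no @ prefix
    }
}

let greeter = Greeter(name: "world")
print(greeter.greeting())   // prints "Hello, world"
```

Both reference types (like this class) and value types (structs, enums) are used with exactly the same syntax at the call site, which is the other half of the complexity that disappears.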

Learning Xcode and Cocoa and iOS development all at once is a monumental task, but if you learn it with Swift first you’ll have a much easier time taking it all in.

Swift is ultimately a bigger language than Objective C, with features like advanced enums, Generics/Templates, tuples, operator overloading, etc. There is more Swift to learn but Cocoa was written in Objective C and it doesn’t make use of these features, so they’re not as essential for doing iOS development today. It’s likely that in the coming years Cocoa will adopt more Swift language features, so it’s still good to be familiar with them, but the fact is learning a core amount of Swift is much more straightforward than learning a core amount of Objective C.
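For a quick taste of those features — a sketch with made-up names, in current Swift syntax — here’s an enum with associated values, a generic function, and a tuple return, none of which Objective C offers:

```swift
// An enum whose cases carry typed data (an "advanced" enum).
enum Shape {
    case circle(radius: Double)
    case rectangle(width: Double, height: Double)
}

func area(of shape: Shape) -> Double {
    switch shape {
    case .circle(let radius):
        return Double.pi * radius * radius
    case .rectangle(let width, let height):
        return width * height
    }
}

// A generic function returning a named tuple: works for any element type,
// checked at compile time.
func firstAndLast<T>(_ items: [T]) -> (first: T, last: T)? {
    guard let first = items.first, let last = items.last else { return nil }
    return (first, last)
}

let pair = firstAndLast([1, 2, 3])   // pair?.first == 1, pair?.last == 3
```

None of this is required to call into Cocoa today, which is the point: you can defer learning it until you need it.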

A Note about Learning Programming with Swift

I should point out I’m not necessarily advocating learning Swift as your first programming language, but instead suggesting that if you’re a developer who’s new to iOS development, you should start with Swift.

If you’re new to programming, there are many better languages for learning, like Lisp, Logo, or Ruby to name just a few. You may very well be able to cut your teeth learning programming with Swift, but it’s not designed as a learning language and has a programming mental model of the “you are a programmer managing computer resources” kind.

Learning Objective C

You should start out learning iOS development with Swift, but once you become comfortable, you should learn Objective C too.

Objective C has been the programming language for iOS since its inception, so there’s lots of it out there in the real world, including books, blog posts, and other projects and frameworks. It’s important to know how to read and write Objective C, but the good news is once you’ve become decent with Swift, programming with Objective C isn’t much of a stretch.

Although their syntaxes differ in some superficial ways, the kind of code you write is largely the same between the two. -viewDidLoad and viewDidLoad() may be implemented in different syntaxes, but what you’re trying to accomplish is basically the same in either case.

The difficult part about learning Objective C after learning Swift, then, is not learning Cocoa and its concepts but instead the earlier mentioned syntactic salt that comes with the language. Because you already know a bit about view controllers and gesture recognizers, you’ll have a much easier time figuring out the oddities of a less modern syntax than you would have if you tried to learn them both at the same time. It’s much easier to adapt this way.

View controllers become gargantuan because they’re doing too many things. Keyboard management, user input, data transformation, view allocation — which of these is really the purview of the view controller? Which should be delegated to other objects? In this post, we’ll explore isolating each of these responsibilities into its own object. This will help us sequester bits of complex code, and make our code more readable.
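The shape of that extraction looks something like this — a deliberately simplified sketch with hypothetical types (no UIKit), showing just the data-transformation responsibility pulled out of the controller:

```swift
// The transformed output the view actually needs.
struct UserViewModel {
    let displayName: String
}

// A dedicated object whose only job is data transformation.
struct UserDataTransformer {
    func viewModel(first: String, last: String) -> UserViewModel {
        return UserViewModel(displayName: "\(first) \(last)")
    }
}

// The controller now just coordinates; the transformer does the work.
final class ProfileViewController {
    let transformer = UserDataTransformer()

    func configure(first: String, last: String) -> String {
        return transformer.viewModel(first: first, last: last).displayName
    }
}
```

Keyboard management, input handling, and view allocation each get the same treatment: a small object with one job, which the controller composes rather than implements.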

On May 8 2014, after many long months of work, we finally shipped Hopscotch 2.0, which was a major redesign of the app. Hopscotch is an interactive programming environment on the iPad for kids 8 and up, and while the dedicated learners used our 1.0 with great success, we wanted to make Hopscotch more accessible for more kids who may have otherwise struggled. Early on, I pushed for a rethinking of the mental model we wanted to present to our programmers so they could better grasp the concept. While I pushed some of the core ideas, this was of course a complete team effort. Every. Single. Member. of our (admittedly small!) team contributed a great deal over many long discussions and long days building the app.

What follows is an examination of mental models, and the models used in various versions of Hopscotch.

Mental models

The human brain is 100,000-year-old hardware we’re relatively stuck with. Applications are software created by that 100,000-year-old hardware. I don’t know which is scarier, but I do know it’s a lot easier to adapt the software than it is to adapt the hardware. A mental model is how you adapt your software to the human brain.

A mental model is a device (in the “literary device” sense of the word) you design for humans to use, knowingly or not, to better grasp concepts and accomplish goals with your software. Mental models work not by making the human “play computer” but by making the computer “play human,” thus giving the person a conceptual framework to think in while using the software.

The programming language Logo uses the mental model of the Turtle. When children program the Turtle to move in a circle, they teach it in terms of how they would move in a circle (“take a step, turn a bit, over and over until you make a whole circle”) (straight up just read Mindstorms).

There are varying degrees of success in a program’s mental model, usually correlating to the amount of thought the designers put into the model itself. A successful mental model results in the person having a strong connection with the software, where a weak mental model leaves people confused. The model of hierarchical file systems (e.g., “files and folders”) has long been a source of consternation for people because it forces them to think like a computer to locate information.

You may know your application’s mental model very intimately because you created it, but most people will not be so fortunate when they start out. The easiest way for people to understand your application’s mental model is to give them smaller leaps to make—for example, most iPhone apps are far more alike than they are different—so they don’t have to tread too far into the unknown.

One of the more effective tricks we employ in graphical user interfaces is the spatial analogy. Views push and pop left and right on the screen, suggesting the application exists in a space extending beyond the bounds of the rectangle we stare at. Some applications offer a spatial analogy in terms of a zooming interface, like a Powers of Ten but for information (or “thought vectors in concept space”, to quote Engelbart) (see Jef Raskin’s The Humane Interface for a thorough discussion on ZUIs).

These spatial metaphors can be thought of as gestures in the Raskinian sense of the term (defined as “…an action that can be done automatically by the body as soon as the brain ‘gives the command’. So Cmd+Z is a gesture, as is typing the word ‘brain’”) where instead of acting, the digital space provides a common, habitual environment for performing actions. There is no Raskinian mode switch because the person already has familiarity with the space.

Hopscotch 1.x

Following in the footsteps of the Logo turtle, Hopscotch characters are programmed in the same egocentric mental model (here’s a video of programming one character in Hopscotch 1.0). If I want Bear to move in a circle, I first ponder how I would move in a circle and translate this to Hopscotch blocks. If this were the whole story, this mental model would be pretty sufficient. But Hopscotch projects can have multiple programmed characters executing at the same time. Logo’s model works well because it’s clear there is one turtle to one programmer, but when there are multiple characters to take care of, it’s conceptually more of a stretch to program them all this way.

Hopscotch 1.0 was split diametrically between the drag and drop code blocks for the various characters in your project and the Stage, the area where your program executes and your characters wiggle their butts off, as directed. This division is quite similar to the “write code / execute program” model most programming environments provide developers, but that doesn’t mean it’s appropriate (for children or professionals). Though the characters were tangible (tappable!) on the Stage, they remained abstract in the code editor. Simply put, there wasn’t a strong connection between your code and your program. This discord made it very difficult for beginners to connect their code to their characters.

Hopscotch 2.x

In the redesign, we unified the Stage-as-player with the Stage-as-editor. In the original version, you programmed your characters by switching between tabs of code, but in the redesign you see your characters as they appear on the Stage. Gone are the two distinct modes; instead you just have the Stage, which you can edit. This means you no longer position characters with a small graph, but instead pick up your characters and place them directly.

The code blocks, which used to live in the tabs, now live inside the characters themselves. This gives a stronger mental model of “My character knows how to draw a circle because I programmed her directly”. When you tap a character you see a list of “Rules” appear as thought bubbles beside the character. Rules are mini-programs for each character that are played for different events (e.g., “When the iPad is tilted, make Bear dance”) and you edit their code by tapping into them. This concept attaches the abstract concept of “code” to the very spatial and tangible characters you’re trying to program, and we found beginners could grasp this concept much quicker than the original model.

Along the way, we added “little things” like custom functions and a mini code preview that highlights code blocks as it executes, to let programmers quickly see the results of their changes for the character they’re programming. These aren’t additions to the mental model per se, but they do help close the gap between abstract code and your characters following your program.

A mental framework

A strong mental model benefits the people using your software because it helps the person and the software meet each other halfway. But mental models also help you as a designer to understand the messages you send through your application. By rethinking our mental model for Hopscotch, we dramatically improved both how we build the program and how people use it, and it’s given us a framework to think in for the future. As you build or use applications, be aware of the signals you send and receive and it will help you understand the software better.

After playing with Swift in my spare time for most of the Summer, and after using Swift full time at Hopscotch for about a month now, I thought I’d share some of my thoughts on the language.

The number one question I hear from iOS developers when Swift comes up is “Should you switch to Swift?” and my answer for that is “Probably yes.” It’s of course not a black and white answer and depends on your situation, but if you’re an experienced Objective C programmer, now is a great time to start working in Swift. I would suggest switching to it full time for all new iOS work (I wouldn’t recommend going back and re-writing your old Objective C code, but maybe replace bits and pieces of it as you see fit).

Idiomatic Swift

One reason I hear for developers wanting to hold off is “Swift is so new there aren’t really accepted idioms or best practices yet, so I’m going to wait a year or two for those to emerge.” I think that’s a fair argument, but I’d argue it’s better for you to jump in and invent them now instead of waiting for somebody else to do it.

I’m pretty sure when I look back on my Swift code a year from now I’ll cringe from embarrassment, but I’d rather be figuring it all out now; I’d rather be helping to establish what Good Swift looks like than just see what gets handed down. The conventions around the language are malleable right now because nothing has been established as Good Swift yet. It’s going to be a lot harder to influence Good Swift a year or two from now.

And the sooner you become productive in Swift, the sooner you’ll find areas where it can be improved. Like the young Swift conventions, Swift itself is a young language—the earlier in its life you file Radars and suggest improvements to the language, the more likely those improvements will be made. Three years from now Swift The Language is going to be a lot less likely to change compared to today. Your early radars today will have enormous effects on Swift in the future.

Swift Learning Curve

Another reason I hear about not wanting to learn Swift today is not wanting to take a major productivity hit while learning the language. In my experience, if you’re an experienced iOS developer you’ll be up to speed with Swift in a week or two, and then you’ll get all the benefits of Swift (even just not having header files, or not having to import files all over the place, makes programming in Swift so much nicer than Objective C that you might not want to go back).

In that week or two when you’re a little slower at programming than you are with Objective C, you’ll still be pretty productive anyway. You certainly won’t become an expert in Swift right away (because nobody except maybe Chris Lattner is one yet anyway!), but you’ll be writing arguably cleaner code, and you might even have some fun doing it.

Grab Bag of Various Caveats

I don’t fully understand initializers in Swift yet, but I kind of hate them. I get the theory behind them, that everything strictly must be initialized, but in practice this super sucks. This solves a problem I don’t think anybody really had.
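For readers who haven’t hit this yet, here’s the rule in practice — a sketch with a hypothetical type, in current Swift syntax: every stored property must have a value before an initializer returns, and the compiler enforces it.

```swift
class Player {
    let name: String
    var score: Int          // must be set in init (or given a default value)
    var nickname: String?   // optionals implicitly default to nil, so they're exempt

    init(name: String) {
        self.name = name
        self.score = 0
        // Deleting the line above would be a compile error: Swift refuses to
        // let an initializer return with `score` uninitialized.
    }
}
```

The strictness guarantees you can never observe a half-initialized object, which is the theory; the practice, as noted above, is a lot of ceremony.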

Compile times for large projects suck. They’re really slow, because (I think) any change in any Swift file causes all the Swift files to be recompiled on build. My hunch is by breaking your project up into smaller Modules (aka Frameworks) this should relieve the slow build times. I haven’t tried this yet.

The Swift debugger feels pretty non-functional most of the time. I’m glad we have Playgrounds to test out algorithms and such, but unfortunately I’ve had to mainly resort to pooping out println()s of debug values.

What the hell is with the name println()? Would it have killed them to actually spell out the word printLine()? Do the language designers know about autocomplete?

The “implicitly unwrapped optional operator” (!) should really either be called the “subversion operator” or the “crash operator.” The major drumbeat we’ve been told about Swift is it’s supposed to not allow you to do unsafe things, hence (among other things) we have Optionals. By implicitly unwrapping the optional, we’re telling the compiler “I know better than you right now, so I’m just going to go ahead and subvert the rules and pretend this thing isn’t nil, because I know it’s not.” When you do this, you’re either going to be correct, in which case Swift was wrong for thinking a value was maybe going to be nil when it isn’t; or you’re going to be incorrect and cause your application to crash.
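The trade-off in miniature (a sketch in current Swift syntax — `Int("…")` is the post-1.0 spelling of string-to-integer conversion):

```swift
// A plain optional makes you handle the nil case before using the value.
let maybeNumber: Int? = Int("42")
if let number = maybeNumber {
    print(number)              // safe: this branch only runs when a value exists
}

// Forcing the unwrap with ! subverts that check. Correct here...
let definitely = Int("42")!    // 42: we "know better" than the compiler

// ...but a crash when the assumption is wrong:
// let boom = Int("not a number")!   // fatal error: unexpectedly found nil
```

The same gamble applies to declaring a property as `var label: String!`: every use is an unchecked bet that the value has been set by then.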

Objective Next

Earlier this year, before Swift was announced, I published an essay, Objective Next which discussed replacing Objective C, both in terms of the language itself and, more importantly, what we should thirst for in a successor:

We don’t need a better Objective C; we need a better way to make software. We do that in two steps: figure out what we’re trying to accomplish, and then figure out how to accomplish it. It’s simple, but nearly every post about replacing Objective C completely ignores these two steps.

In short, a replacement for Objective C that just offers a slimmed down syntax isn’t a real victory at all. It’s simply a new-old thing. It’s a new way to accomplish the exact same kind of software. In a followup essay, I wrote:

If there was one underlying theme of the essay, it was “Don’t be trapped by Instrumental Thinking”, that particularly insidious kind of thinking that plagues us all (myself included), limiting us to thinking about new ideas or technologies only in terms of what we’re currently doing. That is, we often can only see or ask for a new tool to benefit exactly the same job we’re currently doing, where instead we should consider new kinds of things it might enable us to do. […]

When talking in terms of Objective C development, I don’t mean “I’m dreaming of a replacement that’ll just let you create the exact same identical apps, it’ll just have fewer warts,” but I instead mean I’m dreaming of a new, fundamental way to approach building software, that will result in apps richer in the things we care about, like visual and graphic design, usability and interaction, polish, and yes, offer enhancements to the undercarriage, too.

This, unfortunately, is exactly what we got with Swift. Swift is a better way to create the exact same kind of software we’ve been making with Objective C. It may crash a little less, but it’s still going to work exactly the same way. And in fact, because Swift is far more static than Objective C, we might even be a little bit more limited in terms of what we can do. For example, as quoted in a recent link:

The quote in this post’s title [“It’s a Coup”], from Andrew Pontious, refers to the general lack of outrage over the loss of dynamism. In broad strokes, the C++ people have asserted their vision that the future will be static, and the reaction from the Objective-C crowd has been apathy. Apple hasn’t even really tried to make a case for why this U-turn is a good idea, and yet the majority seems to have rolled over and accepted it, anyway.

I still think Swift is a great language and you should use it, but I do find it lamentably not forward-thinking. The intentional lack of a garbage collector really sealed the deal for me. Swift isn’t a new language; it’s C++++. I am glad to get to program in it, and I think the more people using it today, the better it will be tomorrow.

I’ve had this thought stuck in my head for a few months about Beliefs I thought might be useful to share. The thought goes something like this:

A belief about something is scaffolding we should use until we’ve learned more truths about that something.

First, I should point out I don’t think this statement is necessarily entirely true (though it could be), but I do think it’s a useful starting point for a discussion. Second, I also don’t think this view on belief is widely practiced, but I do think it would make for more productive use of beliefs themselves.

We humans tend to be a very belief-based bunch. There are the obvious beliefs like religion and other similar deifications (“What would our forefathers think?”) but we hold strong beliefs all the time without even realizing it.

The public education systems in North America (as I experienced firsthand in Canada and as I’ve read about in America) are based on students believing and internalizing a finite set of “truths” (this is known as a curriculum) and taking precisely those beliefs as granted.

Science presents perhaps the best evidence we’re largely a belief-based species as science exists to seek truths our beliefs are not adapted to explaining. Before the invention of science, we relied on our beliefs to make sense of the world as best we could, but beliefs painted a blurry, monochromatic picture at best. Science is hard because it has to be hard—its job is to adapt parts of the universe which we can’t intuit into something we can place in our concept of reality—but it does a much superior job at explaining reality than our beliefs do.

A friend of mine recently told me “I have beliefs about the world just like everybody else…I just don’t trust them, is all.” I think that’s a productive way to think about beliefs. It would probably be impossible to rid the world of belief, but I think a better approach is to acknowledge and understand belief as a useful, temporary tool. We should teach people to think about belief as a useful means to an end, as a support system, until more is learned about something. Most importantly, we should teach that beliefs should have a shelf-life, and not be permanently trusted.

Most of the problems of software arise because it is too complicated for humans to handle. We believe that much of this complexity is unnecessary and indeed self-inflicted. We seek to radically simplify software and programming. […]

We should measure complexity as the cumulative cognitive effort to learn a technology from novice all the way to expert. One simple surrogate measure is the size of the documentation. This approach conflicts with the common tendency to consider only the efficiency of experts. Expert efficiency is hopelessly confounded by training and selection biases, and often justifies making it harder to become an expert. We are skeptical of “expressive power” and “terseness”, which are often code words for making things more mathematical and abstract. Abstraction is a two-edged sword.

The quote in this post’s title, from Andrew Pontious, refers to the general lack of outrage over the loss of dynamism. In broad strokes, the C++ people have asserted their vision that the future will be static, and the reaction from the Objective-C crowd has been apathy. Apple hasn’t even really tried to make a case for why this U-turn is a good idea, and yet the majority seems to have rolled over and accepted it, anyway.

Why don’t more people question things? What does it mean to question things? What kinds of things do we need to question? What kinds of answers do we hope to find from those questions? What sort of questions are we capable of answering? How do we answer the rest of the questions? Would it help if more people read books? Why does my generation, self included, insist on not reading books? Why do we insist on watching so much TV? Why do we insist on spending so much time on Twitter or Facebook? Why do I care so much how many likes or favs a picture or a post gets? What does it say about a society driven by that? Why are we so obsessed with amusing ourselves to death? Why are there so many damn photo sharing websites and todo applications? Is anybody even reading this? How do we make the world a better place? What does it mean to make the world a better place? Why do we think technology is the only way to accomplish this? Why are some people against technology? Do these people have good reasons for what they believe? Are we certain our reasons are better? Can we even know that for sure? What does it mean to know something for sure? Do computers cause more problems than they solve? Will the world be a better place if everyone learns to program? If we teach all the homeless people Javascript will they stop being homeless? What about the sexists and the racists and the fascists and the homophobes? Who else can help? How do we get all these people to work together? How do we teach them? How can we let people learn in better ways? How can we convince people to let go of their strategies adapted for the past and instead focus on the future? Why are there so many answers to the wrong questions?

I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.

And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.

[H]ow do you ever migrate from a tricycle to a bicycle? A bicycle is very unnatural and hard to learn compared to a tricycle, and yet in society it has superseded all the tricycles for people over five years old. So the whole idea of high-performance knowledge work is yet to come up and be in the domain. It’s still the orientation of automating what you used to do instead of moving to a whole new domain in which you are obviously going to learn quite a few new skills.

[Engelbart]: ‘Someone can just get on a tricycle and move around, or they can learn to ride a bicycle and have more options.’

This is Engelbart’s favourite analogy. Augmentation systems must be learnt, which can be difficult; there is resistance to learning new techniques, especially if they require changes to the human system. But the extra mobility we could gain from particular technical objects and techniques makes it worthwhile.

The great thing about a bike is that it doesn’t wither your physical attributes. It takes everything you’ve got, and it amplifies that! Whereas an automobile puts you in a position where you have to decide to exercise. We’re bad at that because nature never required us to have to decide to exercise. […]

So the idea was to try to make an amplifier, not a prosthetic. Put a prosthetic on a healthy limb and it withers.

There seems to be belief among software developers nowadays that providing instructions indicates a failure of design. It isn’t. Providing instructions is a recognition that your users have different backgrounds and different ways of thinking. A feature that’s immediately obvious to User A may be puzzling to User B, and not because User B is an idiot.

You may not believe this, but when the Macintosh first came out everything about the user interface had to be explained.

Agreed. Of course you have to have a properly labeled interface, but that doesn’t mean you can’t have more powerful features explained in documentation. The idea that everything should be “intuitive” is highly toxic to creating powerful software.

My subject was an intelligent, computer-literate, university-trained teacher visiting from Finland who had not seen a mouse or any advertising or literature about it. With the program running, I pointed to the mouse, said it was “a mouse”, and that one used it to operate the program. Her first act was to lift the mouse and move it about in the air. She discovered the ball on the bottom, held the mouse upside down, and proceeded to turn the ball. However, in this position the ball is not riding on the position pick-offs and it does nothing. After shaking it, and making a number of other attempts at finding a way to use it, she gave up and asked me how it worked. She had never seen anything where you moved the whole object rather than some part of it (like the joysticks she had previously used with computers): it was not intuitive. She also did not intuit that the large raised area on top was a button.

But once I pointed out that the cursor moved when the mouse was moved on the desk’s surface and that the raised area on top was a pressable button, she could immediately use the mouse without another word. The directional mapping of the mouse was “intuitive” because in this regard it operated just like joysticks (to say nothing of pencils) with which she was familiar.

From this and other observations, and a reluctance to accept paranormal claims without repeatable demonstrations thereof, it is clear that a user interface feature is “intuitive” insofar as it resembles or is identical to something the user has already learned. In short, “intuitive” in this context is an almost exact synonym of “familiar.”

And

The term “intuitive” is associated with approval when applied to an interface, but this association and the magazines’ rating systems raise the issue of the tension between improvement and familiarity. As an interface designer I am often asked to design a “better” interface to some product. Usually one can be designed such that, in terms of learning time, eventual speed of operation (productivity), decreased error rates, and ease of implementation it is superior to competing or the client’s own products. Even where my proposals are seen as significant improvements, they are often rejected nonetheless on the grounds that they are not intuitive. It is a classic “catch 22.” The client wants something that is significantly superior to the competition. But if superior, it cannot be the same, so it must be different (typically the greater the improvement, the greater the difference). Therefore it cannot be intuitive, that is, familiar. What the client usually wants is an interface with at most marginal differences that, somehow, makes a major improvement. This can be achieved only on the rare occasions where the original interface has some major flaw that is remedied by a minor fix.

Nobody knew how to use an iPhone before they saw someone else do it. There’s nothing wrong with more powerful software that requires a user to learn something.

Great, so dividing labour must be a good thing, right? That’s why a totally post-Smith industry like producing software has such specialisations as:

full-stack developer

Oh, this argument isn’t going the way I want. I was kind of hoping to show that software development, as much a product of Western economic systems as one could expect to find, was consistent with Western economic thinking on the division of labour. Instead, it looks like generalists are prized.

On market demand:

It’s not that there’s no demand, it’s that the demand is confused. People don’t know what could be demanded, and they don’t know what we’ll give them and whether it’ll meet their demand, and they don’t know even if it does whether it’ll be better or not. This comic strip demonstrates this situation, but tries to support the unreasonable position that the customer is at fault over this.

Just as using a library is a gamble for developers, so is paying for software a gamble for customers. You are hoping that paying for someone to think about the software will cost you less over some amount of time than paying someone to think about the problem that the software is supposed to solve.

But how much thinking is enough? You can’t buy software by the bushel or hogshead. You can buy machines by the ton, but they’re not valued by weight; they’re valued by what they do for you. So, let’s think about that. Where is the value of software? How do I prove that thinking about this is cheaper, or more efficient, than thinking about that? What is efficient thinking, anyway?

I think you can answer this question if you frame most modern software as “entertainment” (or at least, Apps are Websites). It’s certainly not the case that all software is entertainment, but perhaps for the vast majority of people software as they know it is much closer to movies and television than it is to references or mental tools. The only difference is, software has perhaps completed the ultimate wet dream of the entertainment market in that Pop Software doesn’t even really have personalities like music or TV do — the personality is solely that of the brand.

I started working on a side project in January 2014 and like many of my side projects over the years, after an initial few months of vigorous work, the last little while has been mostly off and on work on the project.

The typical list of explanations applies: work gets in the way (work has been a perpetual crunch mode for months now), the project has reached a big enough size that it’s hard to make changes (I’m on an unfamiliar platform), and I’m stuck at a particularly difficult problem (I saved the best for last!).

Since the summer has been more or less fruitless while working on this project, I’m taking a different approach going forward, one I’ve used to some success in the past. It comes down to three main things:

Focus the work to one hour per day, usually in the morning. This causes me to get at least something done once every day, even if it’s just small or infrastructure work. I’ve found limiting to a small amount of time (two hours works well too) also forces me to not procrastinate or get distracted while I’m working. The hour of side project becomes precious and not something to waste.

Stop working when you’re in the middle of something so you have somewhere to ramp up with next time you start (I’m pretty sure this one is cribbed directly from Ernest Hemingway).

Keep a diary for your work. I do this with most of my projects by just updating a text file every day after I’m finished working with what my thoughts were for the day. I usually write about what I worked on and what I plan on working on for the next day. This complements step 2 because it lets me see where I left off and what I was planning on doing. It also helps bring any subconscious thoughts about the project into the front of my brain. I’ll usually spend the rest of the day thinking about it, and I’ll be eager to get started again the next day (which helps fuel step 1, because I have lots of ideas and want to stay focused on them — it forces me to work better to get them done).

That, and I’ve set a release date for myself, which should hopefully keep me focused, too.

You have been taught to use Microsoft Word and the World Wide Web as if they were some sort of reality dictated by the universe, immutable “technology” requiring submission and obedience.

But technology, here as elsewhere, masks an ocean of possibilities frozen into a few systems of convention.

Inside the software, it’s all completely arbitrary. Such “technologies” as Email, Microsoft Windows and the World Wide Web were designed by people who thought those things were exactly what we needed. So-called “ICTs” – “Information and Communication Technologies,” like these – did not drop from the skies or the brow of Zeus. Pay attention to the man behind the curtain! Today’s electronic documents were explicitly designed according to technical traditions and tekkie mindset. People, not computers, are forcing hierarchy on us, and perhaps other properties you may not want.

Research in cognitive science suggests that, while it is important to teach to the strengths of the brain (by allowing students to explore and discover concepts on their own), it is also important to take account of the weaknesses of the brain. Our brains are easily overwhelmed by too much new information, we have limited working memories, we need practice to consolidate skills and concepts, and we learn bigger concepts by first mastering smaller component concepts and skills.

Teachers are often criticized for low test scores and failing schools, but I believe that they are not primarily to blame for these problems. For decades teachers have been required to use textbooks and teaching materials that have not been evaluated in rigorous studies. As well, they have been encouraged to follow many practices that cognitive scientists have now shown are counterproductive. For example, teachers will often select textbooks that are dense with illustrations or concrete materials that have appealing features because they think these materials will make math more relevant or interesting to students. But psychologists such as Jennifer Kaminski have shown that the extraneous information and details in these teaching tools can actually impede learning.

When work piles up, my brain doesn’t have any idle cycles. It jumps directly from one task to another, so there’s no background processing. No creativity! And it feels like all the color and life has been sucked out of the world.

I don’t mind being stressed or doing lots of work or losing sleep, but I’ve been noticing that I’m a boring person when it happens!

…Except for the status bar — that’s Helvetica Neue. And share sheets. And Alerts. And in action sheets. Oh, and in the swipe-the-cell UI in iOS 8. In fact any stock UI with text baked in is pretty much going to use Helvetica Neue in red, black, and blue. Hope you like it.

Maybe this is about consistency of experience. Perhaps Apple thinks that people with bad taste will use an unreadable custom font in a UIAlert and confuse users.

I agree with Dave that the lack of total control is vexing, but I think that’s because with these system features, alert views and the status bar, Apple wants us to treat them more or less like hardware. They’re immutable; they “come from the OS,” like they’re appearing on Official iOS Letterhead paper.

This is why I think Apple doesn’t want us customizing these aspects of iOS. They want to keep the “official” bits as untampered with as possible.

The Apple developer community is atwitter this week about independent developers and whether or not they can earn a good living working independently on the Mac and/or iOS platforms. It’s a great discussion about an unfortunately bleak topic. It’s sad to hear that so many great developers, working on so many great products, are doing so poorly from it. And it seems like a lot of it is mostly out of their control (if I thought I knew a better way, I’d be doing it!). David Smith summarizes most of the discussion (with an awesome list of links):

It has never been easy to make a living (whatever that might mean to you) in the App Store. When the Store was young it may have been somewhat more straightforward to try something and see if it would hit. But it was never “easy”. Most of my failed apps were launched in the first 3 years of the Store. As the Store has matured it has also become a much more efficient marketplace (in the economics sense of market). The little tips and tricks that I used to be able to use to gain an ‘unfair’ advantage now are few and far between.

The basic gist seems to be “it’s nearly impossible to make a living off iOS apps and it’s possible but still pretty hard to do off OS X.” Most of us I think would tend to agree you can charge more for OS X software than you can for iOS because OS X apps are usually “bigger” or more fleshed out, but I think that’s only half the story.

The real reason why it’s so hard to sell iOS apps is that iOS apps are really just websites. Implementation details aside, 95 per cent of people think of iOS apps the same way they think about websites. Websites that most people are exposed to are mostly promotional, ad-laden and most importantly, free. Most people do not pay for websites. A website is just something you visit and use, but it isn’t a piece of software, and this is the exact same way they think of and treat iOS apps. That’s why indie developers are having such a hard time making money.

(Just so we’re clear, I’ve been making iOS apps for the whole duration of the App Store and I know damn well iOS apps are not “websites.” I’m well aware they are contained binaries that may-or-may-not use the internet or web services. I’m talking purely about perceptions here.)

For a simple test, ask any of your non-developer friends what the difference between an “app” and an “application” or “program” is and I’d be willing to bet they think of them as distinct concepts. To most people, “apps” are only on your phone or tablet, and programs are bigger and on your computer. “Apps” seem to be a wholly different category of software from programs like Word or Photoshop, and the idea that Mac and iOS apps are basically the same on the inside doesn’t really occur to people (nor does it need to, really). People “know” apps aren’t the same thing as programs.

Apps aren’t really “used” so much as they are “checked” (how often do people “check Twitter” vs “use Twitter”?), which is usually a brief “visit” measured in seconds (of, ugh, “engagement”). Most apps are used briefly and fleetingly, just like most websites. iOS, then, isn’t so much an operating system as a browser, and the App Store its crappy search engine. Your app is one of limitless other apps, just like your website is one of limitless other websites. The apps people have heard of are the ones that are promoted and advertised, or the ones in their own niches.

I don’t know how to make money in the App Store, but if I had to I’d try to learn from financially successful websites. I’d charge a subscription and I’d provide value. I’d make an app that did something other than have a “feed” or a “stream” or “shared moments.” I’d make an app that helps people create or understand. I’d try new things.

I couldn’t charge $50 for an “app” because apps are perceived as not having that kind of value, which I have to agree with (I know firsthand how much work goes into making an app, but that doesn’t make the app valuable), so maybe we need to create a new category of software on iOS, one that breaks out of the “app” shell (and maybe breaks out of the moniker, too). I don’t know what that entails, but I’m pretty sure that’s what we need.

The future will be context-sensitive. The future will not be interactive.

Are we preparing for this future? I look around, and see a generation of bright, inventive designers wasting their lives shoehorning obsolete interaction models onto crippled, impotent platforms. I see a generation of engineers wasting their lives mastering the carelessly-designed nuances of these dead-end platforms, and carelessly adding more. I see a generation of users wasting their lives pointing, clicking, dragging, typing, as gigahertz processors spin idly and gigabyte memories remember nothing. I see machines, machines, machines.

In my NSNorth 2013 talk, An Educated Guess (which was recorded on video but as of yet has not been published) I gave a demonstration of a programming tool called Cortex and made the rookie mistake of saying it would be on Github “soon.” Seeing as July 2014 is well past “soon,” I thought I’d explain a bit about Cortex and what has happened since the first demonstration.

Cortex is a framework and environment for application programs to autonomously exchange objects without having to know about each other. This means a Calendar application can ask the Cortex system for objects with a Calendar type and receive a collection of objects with dates. These Calendar objects come from other Cortex-aware applications on the system, like a Movies app, or a restaurant webpage, or a meeting scheduler. The Calendar application knows absolutely nothing about these other applications; all it knows is it wants Calendar objects.

Cortex can be thought of a little bit like Copy and Paste. With Copy and Paste, the user explicitly copies a selected object (like a selection of text, or an image from a drawing application) and then explicitly pastes what they’ve copied into another application (like an email application). In between the copy and paste is the Pasteboard. Cortex is a lot like the Pasteboard, except the user doesn’t explicitly copy or paste anything. Applications themselves either submit objects or request objects.

This, of course, results in quite a lot of objects in the system, so Cortex also has a method of weighing the objects by relevance so nobody is overwhelmed. Applications can also provide their own ways of massaging the objects in the system to create new objects (for example, a “Romantic Date” plugin might lookup objects of the Movie Showing type and the Restaurant type, and return objects of the Romantic Date type to inquiring applications).
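To make the submit/request idea above concrete, here is a minimal, self-contained sketch in Python. It is not the actual Cortex implementation (which was Cocoa-based); every name here — `CortexStore`, the type strings, `romantic_date_plugin` — is made up for illustration. It shows applications submitting typed objects, a plugin deriving new objects from existing ones, and relevance-ordered requests:

```python
from collections import defaultdict

class CortexStore:
    """A pasteboard-like store: apps submit and request typed objects
    without knowing about one another."""

    def __init__(self):
        self._objects = defaultdict(list)  # type name -> [(relevance, object)]
        self._plugins = []

    def submit(self, type_name, obj, relevance=1.0):
        """Called by an application to share an object with the system."""
        self._objects[type_name].append((relevance, obj))

    def register_plugin(self, plugin):
        """Plugins derive new objects from existing ones on demand."""
        self._plugins.append(plugin)

    def stored(self, type_name):
        """Raw stored objects of a type (used by plugins)."""
        return [obj for _, obj in self._objects[type_name]]

    def request(self, type_name):
        """Called by an application; returns objects, most relevant first."""
        results = list(self._objects[type_name])
        for plugin in self._plugins:
            results.extend(plugin(type_name, self))
        results.sort(key=lambda pair: pair[0], reverse=True)
        return [obj for _, obj in results]


store = CortexStore()
# A movies app and a restaurant page submit objects independently.
store.submit("Movie Showing", {"title": "Metropolis", "time": "19:00"}, relevance=0.8)
store.submit("Restaurant", {"name": "Chez Henri", "time": "21:00"}, relevance=0.6)

# The "Romantic Date" plugin from the example above, combining both types.
def romantic_date_plugin(type_name, store):
    if type_name != "Romantic Date":
        return []
    return [(0.9, {"movie": m, "dinner": r})
            for m in store.stored("Movie Showing")
            for r in store.stored("Restaurant")]

store.register_plugin(romantic_date_plugin)

# A calendar app knows nothing about movies or restaurants, only the type:
dates = store.request("Romantic Date")
print(dates[0]["movie"]["title"])  # -> Metropolis
```

The point of the sketch is the decoupling: the requesting application names only a type, and the store (plus plugins) decides what comes back and in what order.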

If this sounds familiar, it’s because it was largely inspired by part of a Bret Victor paper with lots of other bits mixed in from my research for the NSNorth talk (especially Bush’s Memex and Engelbart’s NLS).

Although the system I demonstrated at NSNorth was largely a technical demo, it was nonetheless pretty fully featured and to my delight, was exceptionally well-received by those in the audience. For the rest of the conference, I was approached by many excited developers eager to jump in and get their feet wet. Even those who were skeptical were at least willing to acknowledge, despite its shortcomings, the basic premise of applications sharing data autonomously is a worthwhile endeavour.

And so here I am over a year later with Cortex still locked away in a private repository. I wish I could say I’ve been working on it all this time and it’s ten times more amazing than what I’d originally demoed but that’s just not true. Cortex is largely unchanged and untouched since its original showing last year.

On the plane ride home from NSNorth, I wrote out a to-do list of what needed to be done before I’d feel comfortable releasing Cortex into the wild:

1. Writing a plugin architecture. The current plan is to have the plugins be normal Cocoa plugins which will be run by an XPC process. That way if they crash they won’t bring down the main part of the system. This will mean the generation of objects is done asynchronously, so care will have to be taken here.

2. A story for debugging Cortex plugins. It’s going to be really tricky to debug these things, and if it’s too hard then people just aren’t going to develop them. So it has to be easy to visualize what’s going on and easy to modify them. This might mean not using straight compiled bundles but instead using something more dynamic. I have to evaluate what that would mean for people distributing their own plugins, and whether it means they’d always have to be released in source form.

3. How are Cortex plugins installed? The current thought is to allow for an install message to be sent over the normal Cortex protocol (currently HTTP) and either send the plugin that way (from a hosting application) or cause Cortex itself to then download and install the plugin from the web.

4. How would it handle uninstalls? How would it handle malicious plugins? It seems like the user is going to have to grant permission for these things.

5. Relatedly, should there be a permissions system for which apps can get/submit which objects from the system? Maybe we want just “read and/or write” permissions per application.

The most important issue then, and today, is #2. How are you going to make a Cortex component (something that can create or massage objects) without losing your mind? Applications are hard to make, but they’re even harder to make when we can’t see our data. Since Cortex revolves around data, in order to make anything useful with it, programmers need to be able to see that data. Programmers are smart, but we’re also great at coping with things, with juuuust squeaking by with the smallest amount of functionality. A programmer will build-run-kill-change-repeat an application a thousand times before stopping and taking the time to write a tool to help visualize it.

I do not want to promote this kind of development style with Cortex, and until I can find a good solution (or be convinced otherwise) I don’t think Cortex would do anything but languish in public. If this sounds like an interesting problem to you, please do get in touch.

“What?!!” you may ask, incredulously, even though you’re reading this on an LCD screen and it can’t possibly respond to you? “How can I possibly ship a bug-free program and thus make enough money to feed my tribe if I don’t test my shiznit?”

The answer is, you can’t. You should test. Test and test and test. But I’ve NEVER, EVER seen a structured test program that a) didn’t take like 100 man-hours of setup time, b) didn’t suck down a ton of engineering resources, and c) actually found any particularly relevant bugs. Unit testing is a great way to pay a bunch of engineers to be bored out of their minds and find not much of anything. [I know – one of my first jobs was writing unit test code for Lighthouse Design, for the now-president of Sun Microsystems.] You’d be MUCH, MUCH better off hiring beta testers (or, better yet, offering bug bounties to the general public).

Let me be blunt: YOU NEED TO TEST YOUR DAMN PROGRAM. Run it. Use it. Try odd things. Whack keys. Add too many items. Paste in a 2MB text file. FIND OUT HOW IT FAILS. I’M YELLING BECAUSE THIS SHIT IS IMPORTANT.

Most programmers don’t know how to test their own stuff, and so when they approach testing they approach it using their programming minds: “Oh, if I just write a program to do the testing for me, it’ll save me tons of time and effort.”

There’s only three major flaws with this: (1) Essentially, to write a program that fully tests your program, you need to encapsulate all of your functionality in the test program, which means you’re writing ALL THE CODE you wrote for the original program plus some more test stuff, (2) YOUR PROGRAM IS NOT GOING TO BE USED BY OTHER PROGRAMS, it’s going to be used by people, and (3) It’s actually provably impossible to test your program with every conceivable type of input programmatically, but if you test by hand you can change the input in ways that you, the programmer, know might be prone to error.

Sing it.

Doomed to Repeat It. A mostly great article by Paul Ford about the recycling of ideas in our industry:

Did you ever notice, wrote my friend Finn Smith via chat, how often we (meaning programmers) reinvent the same applications? We came up with a quick list: Email, Todo lists, blogging tools, and others. Do you mind if I write this up for Medium?

I think the overall premise is good but I do have thoughts on some of it. First, he claims:

[…] Doug Engelbart’s NLS system of 1968, which pioneered a ton of things—collaborative software, hypertext, the mouse—but deep, deep down was a to-do list manager.

This is a gross misinterpretation of NLS and of Engelbart’s motivations. While the project did birth some “productivity” tools, it was much more a system for collaboration and about Augmenting Human Intellect. A computer scientist not understanding Engelbart’s work would be like a physicist not understanding Isaac Newton’s work.

On to-do lists, I think he gets closest to the real heart of what’s going on (emphasis mine):

The implications of a to-do list are very similar to the implications of software development. A task can be broken into a sequence, each of those items can be executed in turn. Maybe programmers love to do to-do lists because to-do lists are like programs.

I think this is exactly it. This is “the medium is the message” 101. Of course programmers are going to like sequential lists of instructions, it’s what they work in all day long! (Exercise for the reader: what part of a programmer’s job is like email?)

His conclusion is OK but I think misses the bigger cause:

Very little feels as good as organizing all of your latent tasks into a hierarchical list with checkboxes associated. Doing the work, responding to the emails—these all suck. But organizing it is sweet anticipatory pleasure.

Working is hard, but thinking about working is pretty fun. The result is the software industry.

The real problem is in those very last words, software industry. That’s what we do, we’re an industry but we pretend to be, or at least expect, a field [of computer science]. Like Alan Kay says, computing isn’t really a field but a pop culture.

It’s not that email is broken or productivity tools all suck; it’s just that culture changes. People make email clients or to-do list apps in the same way that theater companies perform Shakespeare plays in modern dress. “Email” is our Hamlet. “To-do apps” are our Tempest.

Culture changes but mostly grows with the past, whereas pop culture takes almost nothing from the past and instead demands the present. Hamlet survives in our culture by being repeatedly performed, but more importantly it survives in our culture because it is studied as a work of art. The word “literacy” doesn’t just mean reading and writing, it also implies having a body of work included and studied by a culture.

Email and to-do apps aren’t cultural in this sense because they aren’t treated by anyone as “great works,” they aren’t revered or built-upon. They are regurgitated from one generation to the next without actually being studied and improved upon. Is it any wonder mail apps of yesterday look so much like those of today?

Don’t, under any circumstances, work for less than market rate in order to build other people’s fortunes. Simply don’t do it. Cool product that excites you so in turn you’ll work for a fraction of the market rate? Call that crap out for what it is: a CEO of a company asking you to help build his fortune while at the same time returning you squat.

I know that using a string constant is the accepted best practice. And yet it still bugs me a little bit, since it’s an extra level of indirection when I’m writing and reading code. It’s harder to validate correctness when I have to look up each value — it’s easier when I can see with my eyes that the strings are correct.[…]

But I’m exceptional at spotting typos. And I almost never have cause to change the value of a key. (And if I did, it’s not like it’s difficult. Project search works.)

I’m not going to judge Brent here on his solution, but it seems to me like this problem would be much better solved by using string constants provided Xcode actually showed you the damn values of those constants in auto-complete.

When developers resort to crappy hacks like this, it’s a sign of a deficiency in the tools. If you find yourself doing something like this, you shouldn’t resort to tricks, you should say “I know a computer can do this for me” and you should demand it. (rdar://17668209)

I recently stumbled across an interesting 2004 project called Glancing, whose basic principle is that of replicating the subtle social cues of personal, IRL office relationships like eye contact, nodding, etc. but for people using computers not in the same physical location.

The basic gist (as I understand it) is people, when in person, don’t merely start talking to one another but first have an initial conversation through body language. We glance at each other and try to make eye contact before actually speaking, hoping for the glance to be reciprocated. In this way, we can determine whether or not we should even proceed with the conversation at all, or if maybe the other person is occupied. Matt Webb’s Glancing exists as a way to bridge that gap with a computer (read through his slide notes, they’re detailed but aren’t long). You can look up at your screen and see who else has recently “looked up” too.

Remote work is a tricky problem to solve. We do it occasionally at Hopscotch when working from home, and we’re mostly successful at it, but as a friend of mine recently put it, it’s harder to have a sense of play when experimenting with new features. There is an element of collaboration, of jamming together (in the musical sense) that’s lacking when working over a computer.

Maybe there isn’t really a solution to it and we’re all looking at it the wrong way. Telecommuting has been a topic of research and experimentation for decades and it’s never fully taken off. It’s possible, as Neil Postman suggests in Technopoly, that ours is a generation that can’t think of a solution to a problem outside of technology, and that maybe this kind of collaboration isn’t compatible with technology. I see that as a possibility.

But I also think there’s a remote chance we’re trying to graft on collaboration as an after-the-fact feature to non-collaborative work environments. I work in Xcode and our designer works in Sketch, and when we collaborate, neither of our respective apps are really much involved. Both apps are designed with a single user in mind. Contrast this with Doug Engelbart and SRI’s NLS system, built from the ground up with multi-person collaboration in mind, and you’ll start to see what I mean.

NLS’s collaboration features seem, in today’s world at least, like screen sharing with multiple cursors. But it extends beyond that, because the whole system was designed to support multiple people using it from the get-go.

“We believe that a free and open Internet can bring about a better world,” write the authors of the Declaration of Internet Freedom. Its supporters rise up to decry the supposedly imminent demise of this Internet thanks to FCC policies poised to damage Network Neutrality, the notion of common carriage applied to data networks.

Its zealots paint digital Guernicas, lamenting any change in communication policy as atrocity. “If we all want to protect universal access to the communications networks that we all depend on to connect with ideas, information, and each other,” write the admins of Reddit in a blog post patriotically entitled Only YOU Can Protect Net Neutrality, “then we must stand up for our rights to connect and communicate.”

[…]

What is the Internet? As Evgeny Morozov argues, it may not exist except as a rhetorical gimmick. But if it does, it’s as much a thing we do as it is an infrastructure through which to do it. And that thing we do that is the Internet, it’s pockmarked with mortal regret:

You boot a browser and it loads the Yahoo! homepage because that’s what it’s done for fifteen years. You blink at it and type a search term into the Google search field in the chrome of the browser window instead.

Sitting in front of the television, you grasp your iPhone tight in your hand instead of your knitting or your whiskey or your rosary or your lover.

The shame of expecting an immediate reply to a text or a Gchat message after just having failed to provide one. The narcissism of urgency.

The pull-snap of a timeline update on a smartphone screen, the spin of its rotary gauge. The feeling of relief at the surge of new data—in Gmail, in Twitter, in Instagram, it doesn’t matter.

The gentle settling of disappointment that follows, like a down duvet sighing into the freshly made bed. This moment is just like the last, and the next.

You close Facebook and then open a new browser tab, in which you immediately navigate back to Facebook without thinking.

The web is a brittle place, corrupted by advertising and tracking (see also “Is the Web Really Free?”). I won’t spoil the ending but I’m at least willing to agree with his conclusion.

But the story I really want to tell is not about test scores. It is not even about the math/Logo class. (3) It is about the art room I used to pass on the way. For a while, I dropped in periodically to watch students working on soap sculptures and mused about ways in which this was not like a math class. In the math class students are generally given little problems which they solve or don’t solve pretty well on the fly. In this particular art class they were all carving soap, but what each student carved came from wherever fancy is bred and the project was not done and dropped but continued for many weeks. It allowed time to think, to dream, to gaze, to get a new idea and try it and drop it or persist, time to talk, to see other people’s work and their reaction to yours–not unlike mathematics as it is for the mathematician, but quite unlike math as it is in junior high school. I remember craving some of the students’ work and learning that their art teacher and their families had first choice. I was struck by an incongruous image of the teacher in a regular math class pining to own the products of his students’ work! An ambition was born: I want junior high school math class to be like that. I didn’t know exactly what “that” meant but I knew I wanted it. I didn’t even know what to call the idea. For a long time it existed in my head as “soap-sculpture math.”

It’s beginning to seem to me like constructionist learning is great, but also that we need many different approaches to learning, like atoms oscillating, so that the harmonics of learning can better emerge.

They were using this high-tech and actively computational material as an expressive medium; the content came from their imaginations as freely as what the others expressed in soap. But where a knife was used to shape the soap, mathematics was used here to shape the behavior of the snake and physics to figure out its structure. Fantasy and science and math were coming together, uneasily still, but pointing a way. LEGO/Logo is limited as a build-an-animal-kit; versions under development in our lab will have little computers to put inside the snake and perhaps linear actuators which will be more like muscles in their mode of action. Some members of our group have other ideas: Rather than using a tiny computer, using even tinier logic gates and motors with gears may be fine. Well, we have to explore these routes (4). But what is important is the vision being pursued and the questions being asked. Which approach best melds science and fantasy? Which favors dreams and visions and sets off trains of good scientific and mathematical ideas?

I think the biggest problem still faced by Logo is (like Smalltalk) its success. Logo is highly revered as an educational language, so much so that its methods are generally accepted as “good enough” and not readily challenged. The unfortunate truth is twofold:

In order for Logo to be successful as a general creative medium for learning, there are many other factors which must also be worked on, such as teacher/school acceptance (this is of course no easy feat and no fault of Logo’s designers; it’s just an unfortunate truth. Papert discusses it somewhat in The Children’s Machine).

Logo just hasn’t taken the world by storm. Obviously these things take time, but the implicit assumption seems to be “Logo is done, now the world needs to catch up to it.”

“Good enough” tends to lead us down paths prematurely, when instead we should be pushing further. That’s why most programming languages look like Smalltalk and C. Those languages worked marvelously for their original goals, but they’re far from being the pinnacle of possibility. If Logo were invented today, what would it look like (*future-referencing an ironic project of mine*)?

Computer-aided instruction may seem to refer to method rather than content, but what counts as a change in method depends on what one sees as the essential features of the existing methods. From my perspective, CAI amplifies the rote and authoritarian character that many critics see as manifestations of what is most characteristic of–and most wrong with–traditional school. Computer literacy and CAI, or indeed the use of word-processors, could conceivably set up waves that will change school, but in themselves they constitute very local innovations–fairly described as placing computers in a possibly improved but essentially unchanged school. The presence of computers begins to go beyond first impact when it alters the nature of the learning process; for example, if it shifts the balance between transfer of knowledge to students (whether via book, teacher, or tutorial program is essentially irrelevant) and the production of knowledge by students. It will have really gone beyond it if computers play a part in mediating a change in the criteria that govern what kinds of knowledge are valued in education.

This is perhaps the most damning and troublesome facet of computers for their use in pushing humans forward. Computers are so good at simulating old media that it’s essentially all we do with them. Doing old media is easy, as we don’t have to learn any new skills. We’ve evolved to go with the familiar, but I think it’s time we dip our toes into something a little beyond.

What I find troubling, however, is the notion that this sort of technology should be used to mimic the wrong things:

But what really interests the Tangible Media Group is the transformable UIs of the future. As the world increasingly embraces touch screens, the pullable knobs, twisting dials, and pushable buttons that defined the interfaces of the past have become digital ghosts.

Buttons and knobs! Have we learned nothing from our time with dynamic visuals? Graphical buttons and other “controls” on a computer screen already act like some kind of steampunk interface. We’ve got buttons and sliders and knobs and levers, most of which are not appropriate for computer tasks but which we use because we’re stuck in a mechanical mindset. If we’re lucky enough to be blessed with a dynamic physical interface, why should we similarly squander it?

Hands are super sensitive and super expressive (read John Napier’s book about them and think about how you hold it as you read). They can form powerful or gentle grips and they can switch between them almost instantly. They can manipulate and sense pressure, texture, and temperature. They can write novels and play symphonies and make tacos. Why would we want our dynamic physical medium to focus on anything less?

In addition to allowing for two iPad apps to be used at the same time, the feature is designed to allow for apps to more easily interact, according to the sources. For example, a user may be able to drag content, such as text, video, or images, from one app to another. Apple is said to be developing capabilities for developers to be able to design their apps to interact with each other. This functionality may mean that Apple is finally ready to enable “XPC” support in iOS (or improved inter-app communication), which means that developers could design App Store apps that could share content.

Although I have no sources of my own, I wouldn’t bet against Mark Gurman for having good intel on this. It seems likely that this is real, but I think it might end up being a misunderstanding of problems users are actually trying to solve.

It’s pretty well known that most users struggle with the “windowed-applications” interface paradigm, where there can be multiple, overlapping windows on screen at once. Many users get lost in the windows and end up devoting more time to managing the windows than to actually getting work done. So iOS is mostly a pretty great step forward in this regard. Having two “windows” of apps open at once would be a step back to the difficulties found on the desktop. And even if the windows in iOS 8 don’t overlap, there are still two different apps to multitask with — something else pretty well known to cause strife in people.

Having multiple windows seems like a kind of “faster horse,” a way to just repurpose the “old way” of doing something instead of trying to actually solve the problem users are having. In this case, the whole impetus for showing multiple windows or “dragging and dropping between apps” is to share information between applications.

Users writing an email might want details from a website, map, or restaurant app. Users trying to IM somebody might want to share something they’ve just seen or made in another app. Writers might want to refer to links or page contents from a Wikipedia app. These sorts of problems can all be solved by juxtaposing app windows side by side, but to me it seems like a cop-out.

A better solution would be to share the data between applications, through some kind of system service. Instead of drag and drop, or copy and paste (both are essentially the same thing), objects are implicitly shared across the system. If you are looking at a restaurant in one app, then switch to a maps app, that map should show the restaurant (along with any other object you’ve recently seen with a location). When you head to your calendar, it should show potential mealtimes (with the contact you’re emailing with, of course).

This sort of “interaction” requires thinking about the problem a little differently, but it’s advantageous because it ends up skipping most of the interaction users actually have to do in the first place. Users don’t need to drag and drop, they don’t need to copy and paste, and they don’t need to manage windows. They don’t need to be overloaded by seeing too many apps on screen at once.

I’ve previously talked about this, and my work on this problem is largely inspired by a section in Magic Ink. It’s sort of a “take an object; leave an object” kind of system, where applications can send objects to the system service, and others can request objects from the system (and of course, applications can provide feedback as to which objects should be shown and which should be ignored).
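To make the “take an object; leave an object” idea a little more concrete, here’s a minimal sketch of what such a system service could look like. Everything here is hypothetical (the names `ObjectPool`, `leave`, and `take` are my own inventions, not any real iOS API); it just illustrates apps depositing recently seen objects and other apps requesting the kinds they understand:

```python
from dataclasses import dataclass, field
import time

@dataclass
class SharedObject:
    """An object an app deposits into the system-wide pool."""
    kind: str      # e.g. "restaurant", "contact", "location"
    payload: dict
    deposited_at: float = field(default_factory=time.time)

class ObjectPool:
    """A hypothetical 'take an object; leave an object' system service.

    Apps leave objects the user has recently seen; other apps request
    recent objects of the kinds they can display, ranked by recency.
    """
    def __init__(self):
        self._objects = []

    def leave(self, obj: SharedObject):
        self._objects.append(obj)

    def take(self, kinds, limit=5):
        matches = [o for o in self._objects if o.kind in kinds]
        matches.sort(key=lambda o: o.deposited_at, reverse=True)
        return matches[:limit]

# A restaurant app deposits what the user just looked at...
pool = ObjectPool()
pool.leave(SharedObject("restaurant",
                        {"name": "Isaac's Way", "lat": 45.96, "lon": -66.64}))

# ...and a maps app, on launch, asks for anything it can plot,
# with no drag and drop or copy and paste on the user's part.
plottable = pool.take(kinds={"restaurant", "location"})
```

The key design point is that the user never performs the transfer; the apps negotiate through the pool, and each app decides which of the returned objects are relevant enough to show.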

I don’t expect Apple to do this in iOS 8, but I do hope somebody will consider it.

Legible Mathematics. Absolutely stunning and thought-provoking essay on a new interface for math as a method of experimenting with new interfaces for programming.

The essential premise of the book, which Postman extends to the rest of his argument(s), is that “form excludes the content,” that is, a particular medium can only sustain a particular level of ideas. Thus rational argument, integral to print typography, is militated against by the medium of television for the aforesaid reason. Owing to this shortcoming, politics and religion are diluted, and “news of the day” becomes a packaged commodity. Television de-emphasises the quality of information in favour of satisfying the far-reaching needs of entertainment, by which information is encumbered and to which it is subordinate.

America was formed as, and made possible by, a literate society, a society of readers, when presidential debates took five hours. But television (and other electronic media) erode many of the modes in which we (i.e., the world, not just America) think.

If you work in media (and software developers, software is very much a medium) then you have a responsibility to read and understand this book. Your local library should have a copy, too.

As we enjoy the Net’s bounties, are we sacrificing our ability to read and think deeply?

Now, Carr expands his argument into the most compelling exploration of the Internet’s intellectual and cultural consequences yet published. As he describes how human thought has been shaped through the centuries by “tools of the mind”—from the alphabet to maps, to the printing press, the clock, and the computer—Carr interweaves a fascinating account of recent discoveries in neuroscience by such pioneers as Michael Merzenich and Eric Kandel. Our brains, the historical and scientific evidence reveals, change in response to our experiences. The technologies we use to find, store, and share information can literally reroute our neural pathways.

It’s a well-researched book about how computers — and the internet in general — physically alter our brains and cause us to think differently. In this case, we think more shallowly because we’re continuously zipping around links and websites, and we can’t focus as well as we could when we were a more literate society. Deep reading goes out the browser window, as it were.

In other words, people don’t seem to stay or at least willing to explore more when they arrive on a blog they probably never saw before. I’m surprised, and not because I’m so vain to think I’m that charismatic as to retain 90% of new visitors, but by the general lack of curiosity. I can understand that not all the people who followed MacStories’ link to my site had to like it or agree with me. What I don’t understand is the behaviour of who liked what they saw. Why not return, why not decide to keep an eye on my site?

I’ve thought a lot about this sort of thing basically the whole time I’ve been running Speed Of Light (just over four years now, FYI) and although I don’t consider myself to be any kind of great writer, I’ve always been a little surprised by the lack of traffic the site gets, even after some articles have been linked from major publications.

On any given day, a typical reader of my site will probably see a ton of links from Twitter, Facebook, an RSS feed, or a link site they read. Even if the content on any of those websites is amazing, a reader probably isn’t going to spend too much time hanging around, because there are forty or fifty other links for them to see today.

This is why nobody sticks around. This is why readers bounce. It’s why we have shorter, more superficial articles instead of deep essays. It’s why we have tl;dr. The torrent of links becomes a torment of links because we won’t and can’t stay on one thing for too long.

And it also poses moral issues for writers (or for me, at least). I know there’s a deluge, and every single thing I publish on this website contributes to that. But the catch is the way to get more avid readers these days is to publish copiously. The more you publish, the more people read, the more links you get, the more people become subscribers. What are we to do?

I don’t have a huge number of readers, but those who do read the site I respect tremendously. I’d rather have fewer, but more thoughtful readers who really care about what I write, than more readers who visit because I post frequent-but-lower-quality articles. I’d rather write long form, well-researched, thoughtful essays than entertaining posts. I know most won’t sit through more than three paragraphs but those aren’t the readers I’m after, anyway.

Again, here’s this urge to find the iPad some specific purpose, some thing it can do better than this device category or that other device category otherwise it’ll fade away.

If we want the iPad to be better at something, the answer is in the software, of course. Software truly optimised for the iPad. Software truly specialised for the iPad.

What I wonder is: where are all the apps in which you spend at least one whole hour doing the same thing (other than “consuming,” as you would in Safari, Netflix, or Twitter; I mean something real)? Obviously I think Hopscotch is a candidate, but what else?

We need apps daring enough to be measured beyond “minutes of engagement” and we need developers daring enough to build them.

Almost every American I know does trade large portions of his life for entertainment, hour by weeknight hour, binge by Saturday binge, Facebook check by Facebook check. I’m one of them. In the course of writing this I’ve watched all 13 episodes of House of Cards and who knows how many more West Wing episodes, and I’ve spent any number of blurred hours falling down internet rabbit holes. All instead of reading, or writing, or working, or spending real time with people I love.

Whenever anybody brings up the subject of creating software in a graphical environment, Smalltalk inevitably comes up. Since I’ve been publishing lots lately about such environments, I’ve been hearing lots of talk about Smalltalk, too. The most common response I hear is something along the lines of:

You want a graphical environment? Well kid, we tried that with Smalltalk years ago and it failed, so it’s hopeless.

Outside of some select financial markets, Smalltalk is not used much for professional software development, but Smalltalk didn’t fail. In order to fail, a technology must attempt, but remain unsuccessful at achieving, its goals. But when developers grunt that “Smalltalk failed”, they are saying, unaware of it themselves, that Smalltalk has failed for their goals. The goal of Smalltalk, as we’ll see, wasn’t really so much a goal as it was a vision, one that is still successfully being followed to this day.

There is a failure

But the failure is that of the software development community at large to do their research and to understand technologies through the lens of their creators, instead of trying to look at history in today’s terms.

The common gripes against Smalltalk are that it’s an image-based environment, which doesn’t mesh well with source control management, and that these images are large and cumbersome for distribution and sharing. It’s true, a large image-based memory dump doesn’t work too well with Git, and on the whole Smalltalk doesn’t fit too well with our professional software development norms.

But it should be plain to anyone who’s done even the slightest amount of research on the topic that Smalltalk was never intended to be a professional software development environment. For a brief history, see Alan Kay’s Early History of Smalltalk, John Markoff’s What the Dormouse Said or Michael Hiltzik’s Dealers of Lightning. Although Xerox may have attempted to push Smalltalk as a professional tool after the release of Smalltalk 80, it’s obvious from the literature this was not the original intent of Smalltalk’s creators in its formative years at Xerox PARC.

A Personal Computer for Children of All Ages

The genesis of Smalltalk, its raison d’être, was to be the software running on Alan Kay’s Dynabook vision. In short, Alan saw the computer as a personal, dynamic medium for learning, creativity, and expression, and created the concept of the Dynabook to pursue that vision. He knew the ways the printing press and literacy revolutionized the modern world, and imagined what a book would be like if it had all the brilliance of a dynamic computer behind it.

Smalltalk was not then designed as a way for professional software development to take place, but instead as a general purpose environment in which “children of all ages” could think and reason in a dynamic environment. Smalltalk never started out with an explicit goal, but was instead a vehicle to what’s next on the way to the Dynabook vision.

In this regard, Smalltalk was quite successful. As a general professional development environment, Smalltalk is not the best, but as a language designed to create a dynamic medium of expression, Smalltalk was and is highly successful. See Alan give a demo of a modern, Smalltalk-based system for an idea how simple it is for a child to work with powerful and meaningful tools.

The Vehicle

Smalltalk and its descendants are far from perfect. They represent but one lineage of tools created with the Dynabook vision in mind, but they of course do not have to be the final say in expressive, dynamic media for a new generation. But whether you’re chasing that vision or just trying to understand Smalltalk as a development tool, it’s crucial to not look at it as how it fails at your job, but how your job isn’t what it’s trying to achieve in the first place.

This is nowhere more evident than in the world of the mobile app. Any one app comprises a very small number of very focussed, very easy to use features. This has a couple of different effects. One is that my phone as a whole is an incredibly broad, incredibly shallow experience.

I think Graham is very right here (and it’s not just limited to mobile, either, but it’s definitely most obvious there). It’s so hard to make software that actually, truly, does something useful for a person, to help them understand and make decisions, because we have to focus so much on the lowest common denominator.

We see those awesome numbers of how many iOS devices there are in the wild, and we think “If I could just get N% of those users, I’d have a ton of money” and it’s true, but it means you’ve also got to appeal to a huge population of users. You have to go broad instead of deep. The amount of time someone spends in your software is often measured in seconds. How do you do much of anything meaningful in seconds? 140 characters? Six seconds of video?

And with an audience so broad and an application so generic, you can’t expect to charge very much for it. This is why anything beyond $1.99 is unthinkable in the App Store (most users won’t pay anything at all).

What would a programming tool suitable for experts (or the proficient) look like? Do we have any? Alan Kay is fond of saying that we’re stuck with novice-friendly user experiences, that don’t permit learning or acquiring expertise:

There is the desire of a consumer society to have no learning curves. This tends to result in very dumbed-down products that are easy to get started on, but are generally worthless and/or debilitating. We can contrast this with technologies that do have learning curves, but pay off well and allow users to become experts (for example, musical instruments, writing, bicycles, etc. and to a lesser extent automobiles).

Perhaps, while you could never argue that common programming languages don’t have learning curves, they are still “generally worthless and/or debilitating”. Perhaps it’s true that expertise at programming means expertise at jumping through the hoops presented by the programming language, not expertise at telling a computer how to solve a problem in the real world.

I wouldn’t argue that about programming languages. Aside from languages which are purposefully limited in scope or in target (Logo and Hopscotch come to mind), I think most programming languages aren’t tremendously different in terms of their syntax or capability.

Compare Scheme with Java. Although Java does have more syntax than Scheme, it’s not really that much more in the grand scheme (sorry) of things. Where languages really do differ in power is in libraries, but then that’s really just a story of “Who’s done the work, me or the vendor?”

I don’t think languages need the kitchen sink, but I do think languages need to be able to build the kitchen sink.

On Monday, I published an essay exploring some thoughts about a replacement for Objective C, how to really suss out what I think would benefit software developers the most, and how we could go about implementing that. Gingerly though I pranced around certain details, and implore though I did for developers not to get caught up on those details, alas many were snagged on some of the less important parts of the essay. So, I’d like to, briefly if I may, attempt to clear some of those up.

What We Make

If there was one underlying theme of the essay, it was “Don’t be trapped by Instrumental Thinking”, that particularly insidious kind of thinking that plagues us all (myself included), limiting us to thinking about new ideas or technologies only in terms of what we’re currently doing. That is, we often can only see or ask for a new tool to benefit exactly the same job we’re currently doing, when instead we should consider new kinds of things it might enable us to do.

Word processors are a prime example of this. When the personal computer revolution began, it was aided largely by the word processor — essentially a way to automatically typeset your document. The document — the content of what you produced — was otherwise identical, but the word processor made your job of typesetting much easier.

Spreadsheets, on the other hand, were something essentially brand new that emerged from the computer. Instead of just doing an old analog task, but better (as was the case with the word processor), spreadsheets allowed users to do something they just couldn’t do otherwise without the computer.

The important lesson of the spreadsheet, the one I’m trying to get at, is that it got to the heart of what people in business wanted to do: it was a truly new, flexible, and better way to approach data, like finances, sales, and other numbers. It wasn’t just paper with the kinks worked out, it wasn’t just a faster horse, it was a real, new thing that solved their problems in better ways.

When talking in terms of Objective C development, I don’t mean “I’m dreaming of a replacement that’ll just let you create the exact same identical apps, it’ll just have fewer warts,” but I instead mean I’m dreaming of a new, fundamental way to approach building software, that will result in apps richer in the things we care about, like visual and graphic design, usability and interaction, polish, and yes, offer enhancements to the undercarriage, too.

It’s far from being just about a pretty interface, it’s about rethinking what we’re even trying to accomplish. We’re trying to make software that’s understandable, that’s powerful, that’s useful, and that will benefit both our customers and ourselves. And while I think we might eventually get there if we keep trotting along as we’re currently doing, I think we’re also capable of leaping forward. All it takes is some imagination and maybe a pinch of willingness.

Graphical Programming

When “graphical programming” is brought up around programmers, the lament is palpable. To most, graphical programming translates literally into “pretty boxes with lines connecting them,” something akin to UML, where the “graphical” part of programming is actually just a way for the graphics to represent code (but please do see GRAIL or here, a graphical diagramming tool designed in the late 1960s which still spanks the pants off most graphical coding tools today). This is not what I consider graphical programming to be. This is, at best, graphical coding, to which I palpably lament in agreement.

When I mention “graphical programming” I mean creating a graphical program (e.g., a view with colours and text) in a graphical way, like drawing out rectangles, lines, and text as you might do in Illustrator (see this by Bret Victor (I know, didn’t expect me to link to him right?) for probably my inspiration for this). When most people hear graphical programming, they think drawing abstract boxes (that probably generate code, yikes), but what I’m talking about is drawing the actual interface, as concretely as possible (and then abstracting the interface for new data).

There are loads of crappy attempts at the former, and very few attempts at all at the latter. There’s a whole world waiting to be attempted.
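As a toy illustration of that latter approach (entirely hypothetical, and in Python rather than any real design tool), imagine the tool recording the one concrete element the designer actually drew, then abstracting that drawing over new data, rather than ever asking anyone to wire up abstract boxes:

```python
from dataclasses import dataclass

@dataclass
class Label:
    """A concrete drawn element: text at a position, in a colour."""
    x: int
    y: int
    text: str
    color: str = "black"

def abstract_over(template: Label, items, spacing=30):
    """The 'abstracting the interface for new data' step: turn one
    concretely drawn label into a repeated, data-driven column of them."""
    return [
        Label(template.x, template.y + i * spacing, item, template.color)
        for i, item in enumerate(items)
    ]

# The designer draws one concrete example, as in Illustrator...
drawn = Label(x=20, y=40, text="Isaac's Way", color="navy")

# ...and the tool generalizes that drawing over real application data.
rows = abstract_over(drawn, ["Isaac's Way", "540 Kitchen", "Cinnamon Cafe"])
```

The point of the sketch is the direction of travel: the designer works concretely first and the abstraction is derived from the drawing, instead of the abstraction (code) coming first and the graphics merely depicting it.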

Interface Builder

Interface Builder is such an attempt at drawing out your actual, honest to science, interface in a graphical way, and it’s been moderately successful, but I think the tool falls tremendously short. Your interfaces unfortunately end up conceptually the same as mockups (“How do I see this with my real application data? How do I interact with it? How come I can only change certain view properties, but not others, without resorting to code?”). These deficiencies arise because IB is a graphical tool in a code-first town. Although it abides, IB is a second-class citizen so far as development tools go. Checkboxes for interface features get added essentially at Apple’s whim.

What we need is a tool where designing an interface means everything interface-related is a first-class citizen.

Compiled vs Interpreted

Oh my goodness do I really have to go there? After bleating for so many paragraphs about considering thinking beyond precisely what must be worked on right-here-and-now, so many get caught up on the Compiled-v-Interpreted part.

Just to be clear, I understand the following (and certainly, much more):

Compiled applications execute faster than interpreted ones.

Depending on the size of the runtime or VM, an interpreted language consumes more memory and energy than a compiled language.

Some languages (like Java) are actually compiled to a kind of bytecode, which is then executed in a VM (fun fact: I used to work on the JVM at IBM as a co-op student).

All that being said, I stand by my original assertion that for the vast majority of the kinds of software most of us in the Objective C developer community build, the differences between the two styles of language in terms of performance are negligible, not in terms of absolute difference, but in terms of what’s perceptible to users. And that will only improve over time, as phones, tablets, and desktop computers all amaze our future selves by how handily they run circles around what our today selves even imagined possible.

If I leave you with nothing else, please just humour me about all this.

Apple is a product-driven company, not a computing-driven company. While there are certainly many employees interested in the future of computing, Apple isn’t the company to drive [replacing it]. It’s hard to convince such a highly successful product company to operate otherwise.

So if a successor language is to emerge, it’s got to come from elsewhere. […] I’m not convinced a better Objective C, some kind of Objective Next is the right way to go. A new old thing is not really what we need. It seems absurd that 30 years after the Mac we still build the same applications the same ways. It seems absurd we still haven’t really caught up to Smalltalk. It seems absurd beautiful graphical applications are created solely and sorely in textual, coded languages. And it seems absurd to rely on one vendor to do something about it.

There has been lots of talk in the weeks since I posted my article criticizing Objective C, including posts by my friend Ash Furrow, by Steve Streza, by Guy English, and by Brent Simmons. Many of their criticisms are similar to mine or ring true, but the suggestions for fixing the ills of Objective C almost all miss the point:

We don’t need a better Objective C; we need a better way to make software. We do that in two steps: figure out what we’re trying to accomplish, and then figure out how to accomplish it. It’s simple, but nearly every post about replacing Objective C completely ignores these two steps.

I work on programming languages professionally at Hopscotch, which I mention not so I can brag about it but so I can explain this is a subject I care deeply about, something I work on every day. This isn’t just a cursory glance because I’ve had some grumbles with the language. This essay is my way of critically examining and exploring possibilities for future development environments we can all benefit from. That requires stepping a bit beyond what most Objective C developers seem willing to consider, but it’s important nonetheless.

These are all really very nice and good, but they’re actually putting the CPU before the horse. If you ask most developers why they want any of those things, they’ll likely tell you it’s because those are the rough spots of Objective C as it exists today. But they’ll say nothing of what they’re actually trying to accomplish with the language (hat tip to Guy English though for being the exception here).

This kind of thinking is what Alan Kay refers to as “instrumental thinking,” where you think of new inventions only in terms of how they allow you to do your same precise job in a better way. Personal computing software has fallen victim to instrumental thinking routinely since its inception. A word processor’s sole function is to help you lay out your page better, but it does nothing to help your writing (orthography is a technicality).

The same goes for the thinking around replacing Objective C. Almost all the wishlists for replacements simply ask for wrinkles to be ironed out.

If you’re wondering what such a sandpapered Objective Next might look like, I’ll point you to one I drew up in early 2013 (while I too was stuck in the instrumental thinking trap, I’ll admit).

It’s easy to get excited about the (non-existing) language if you’re an Objective C programmer, but I’m imploring the Objective C community to try to think beyond a “new old thing,” and to actually think of something that solves the bigger picture.

When thinking about what could really replace Objective C, then, it’s crucial to clear your mind of the minutia and dirt involved in how you program today, and try to think exclusively of what you’re trying to accomplish.

For most Objective C developers, the goal is to make high quality software that looks and feels great to use. We want our products to have a tremendous amount of delight and polish. And hopefully, most importantly, we’re trying to build software that significantly improves people’s lives.

That’s what we want to do. That’s what we want to do better. The problem isn’t about whether or not our programming language has garbage collection, the problem is whether or not we can build higher quality software in a new environment than we could with Objective C’s code-wait-run cycle.

In the Objective C community, “high quality software” usually translates to visually beautiful and fantastically usable interfaces. We care a tremendous amount about how our applications are presented to and understood by our users, and this kind of quality takes a mountain of effort to accomplish. Our software is usually developed by a team of programmers and a team of designers, working in concert to deliver on the high quality standards we’ve set for ourselves. More often than not, the programmers become the bottleneck, if only because every other part of the development team must ultimately have their work funnelled through code at some point. This causes long feedback loops in development, and if it’s frustrating to make and compare improvements to the design, such improvement is often forgone altogether.

This strain trickles down to the rest of the development process. If it’s difficult to experiment, then it’s difficult to imagine new possibilities for what your software could do. This, in part, reinforces our instrumental thinking, because it’s usually just too painful to try to think outside the box. We’d never be able to validate our outside-the-box thinking even if we wanted to! And this, too, strains our ability to build software that significantly enhances the lives of our customers and users.

With whatever Objective C replacement there may be, whether we demand it or build it ourselves, isn’t it better to think not about how to improve Objective C, but about how to make the interaction between programmer and designer more cohesive? Or how to shift some of the power (and responsibility) of the computer out of the hands of the programmer and into the arms of the designer?

Something as simple as changing the colour or size of text should not be the job of the programmer, not because the programmer is lazy (which is most certainly probably true anyway) but because these are issues of graphic design, of presentation, which the designer is surely better trained and better equipped to handle. Yet this sort of operation is almost agonizing from a development perspective. It’s not that making these changes is hard, but that it often requires the programmer to switch tasks, when and only when there is time, and then present changes to the designer for approval. This is one loose feedback loop, and there’s no good reason why it has to be this way. It might work out pretty well the other way.
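To illustrate what shifting that power might look like, here’s a minimal sketch in Python (the theme format and names are invented for illustration, not any real framework): the app reads its text styles from a designer-editable file, so changing a colour or a size never requires touching program code or waiting on a programmer.

```python
import json

# Hypothetical theme file a designer edits directly; the app loads it at
# launch (or on file change), so visual tweaks never go through a programmer.
THEME_JSON = """
{
    "headline": {"color": "#fc222f", "size": 24},
    "body":     {"color": "#333333", "size": 14}
}
"""

def load_theme(source: str) -> dict:
    """Parse a designer-editable theme into per-role style dictionaries."""
    return json.loads(source)

theme = load_theme(THEME_JSON)
# The app applies these values when rendering text; the designer owns them.
print(theme["headline"]["color"])  # #fc222f
```

This is only a sketch of the idea; the point is the division of responsibility, not the file format.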

Can you think of any companies where design is paramount?

When you’re thinking of a replacement for Objective C, remember to think of why you want it in the first place. Think about how we can make software in better ways. Think about how your designers can improve your software if they had more access to it, or how you could improve things if you could only see them.

This is not just about a more graphical environment, and it’s not just about designers playing a bigger role. It’s about trying to seek out what makes software great, and how our development tools could enable software to be better.

How do we do it?

If we’re going to build a replacement programming environment for Objective C, what’s it going to be made of? Most compiled languages can be built with LLVM quite handily these days—

STOP

We’ve absolutely got to stop and check more of our assumptions first. Why do we assume we need a compiled language? Why not an interpreted language? Objective C developers are pretty used to this concept, and most developers will assert compiled languages are faster than interpreted or virtual machine languages (“just look at how slow Android is; that’s because it runs Java, and Java runs in a VM,” we say). It’s true that compiled apps are almost always going to be faster than interpreted apps, but the difference isn’t substantial enough to close the door on them so firmly, so quickly. Remember, today’s iPhones are as fast as, if not faster than, a pro desktop computer of ten years ago, and those ran interpreted apps just fine. While you may be able to point at stats and show me that compiled apps are faster, in practice the differences are often negligible, especially with smart programmers doing the implementation. So let’s keep the door open on interpreted languages.

Whether compiled or interpreted, if you’re going to make a programming language then you definitely need to define a grammar, work on a parser, and—

STOP

Again, we’ve got to stop and check another assumption. Why make the assumption that our programming environment of the future must be textual? Lines of pure code, stored in pure text files, compiled or interpreted, it makes little difference. Is that the future we wish to inherit?

We presume code whenever we think of programming, probably because it’s all most of us are ever exposed to. We don’t even consider the possibility that we could create programs without typing in code. But with all the abundance and diversity of software, both graphical and not, should it really seem so strange that software itself might be created in a medium other than code?

“Ah, but we’ve tried that and it sucked,” you’ll say. For every sucky coded programming language, there’s probably a sucky graphical programming language too. “We’ve tried UML and we’ve tried Smalltalk,” you’ll say, and I’ll say “Yes we did, 40 years of research and a hundred orders of magnitude ago, we tried, and the programming community at large decided it was a capital Bad Idea.” But as much as times change, computers change more. We live in an era of unprecedented computing power, with rich (for a computer) displays, ubiquitous high speed networking, and decades of research.

For some recent examples of graphical programming environments that actually work, see Bret Victor’s Stop Drawing Dead Fish and Drawing Dynamic Visualizations talks, or Toby Schachman’s (of Recursive Drawing fame) excellent talk on opening up programming to new demographics by going visual. I’m not saying any one of these tools, as is, is a replacement for Objective C, but I am saying these tools demonstrate what’s possible when we open our eyes, if only the tiniest smidge, and try to see what software development might look like beyond coded programming languages.

And of course, just because we should seriously consider non-code-centric languages doesn’t mean that everything must be graphical either. There are of course concepts we can represent linguistically which we can’t map or model graphically, so to completely eschew a linguistic interface to program creation would be just as absurd as completely eschewing a graphical interface to program creation in a coded language.

The benefits for even the linguistic parts of a graphical programming environment are plentiful. Consider the rich typographic language we forego when we code in plain text files. We lose the benefits of type choices, of font sizes and weight, hierarchical groupings. Even without any pictures, think how much more typographically rich a newspaper is compared to a plain text program file. In code, we’re relegated to fixed-width, same size and weight fonts. We’re lucky if we get any semblance of context from syntax highlighting, and it’s often a battle to impel programmers to use whitespace to create ersatz typographic hierarchies in code. Without strong typography, nothing looks any more important than anything else as you gawk at a source code file. Experienced code programmers can see what’s important, but they’re not seeing it with their eyes. Why should we, the advanced users of advanced computers, be working in a medium that’s less visually rich than even the first movable type printed books, five centuries old?

And that’s to say nothing of other graphical elements. Would you like to see some of my favourite colours? Here are three: #fed82f, #37deff, #fc222f. Aren’t they lovely? The computer knows how to render those colours better than we know how to read hex, so why doesn’t it do that? Why don’t we demand this of our programming environment?
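Decoding a hex literal into the RGB values a colour swatch would need is, after all, trivial for the machine. A small sketch (plain Python, with an invented helper name) of the conversion an editor could do for us:

```python
def hex_to_rgb(hex_color: str) -> tuple:
    """Convert a CSS-style hex string like '#fed82f' to an (r, g, b) tuple.

    An editor that did this could render a swatch beside the literal,
    instead of making the programmer decode hex in their head.
    """
    h = hex_color.lstrip("#")
    # Take the string two hex digits at a time: red, green, blue.
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb("#fed82f"))  # (254, 216, 47)
```

If a three-line function suffices, the only thing stopping our tools from showing us the colour is that nobody asked them to.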

Objective: Next

If we’re ever going to get a programming environment of the future, we should make sure we get one that’s right for us and our users. We should make sure we’re looking far down the road, not at our shoes. We shouldn’t try to build a faster horse, but we should instead look where we’re really trying to go and then find the best way to get there.

We also shouldn’t rely on one company to get us there. There’s still plenty to be discovered by plenty of people. If you’d like to help me discover it, I’d love to hear from you.

NSNorth 2014. I can’t believe I haven’t yet written about it, but Ottawa’s very own NSNorth is back this year and it’s looking to be better than ever (that’s saying a lot, considering I spoke at the last one!).

But then it hit me. Code is not literature and we are not readers. Rather, interesting pieces of code are specimens and we are naturalists. So instead of trying to pick out a piece of code and reading it and then discussing it like a bunch of Comp Lit. grad students, I think a better model is for one of us to play the role of a 19th century naturalist returning from a trip to some exotic island to present to the local scientific society a discussion of the crazy beetles they found: “Look at the antenna on this monster! They look incredibly ungainly but the male of the species can use these to kill small frogs in whose carcass the females lay their eggs.”

I think it’s true that code is not literature, but I also think it’s kind of a bum steer to approach code like science. We investigate things in science because we have to. Nature has created the world a certain way, and there’s no way to make it understandable but to investigate it.

But code isn’t a natural phenomenon, it’s something made by people, and as such we have the opportunity (and responsibility) to make it accessible without investigation.

If we need to decode something, something that we ourselves make, I think that’s a sign we shouldn’t be encoding it in the first place.

Due to circumstances out of my control, my previously mentioned Understanding Software talk has been pushed back to April 29th. More detailed info will be posted on Meetup closer to the actual date, and I’ll link to it from here.

In a letter received by Speed of Light postmarked February 3rd, 2014, the authors of The Federalist Papers contend Facebook’s latest iPhone app, Paper, should be renamed. The authors, appearing under the pseudonym Publius, write:

It has been frequently remarked, that it seems to have been reserved to the creators of Facebook, by their conduct and example, to appropriate the name Paper for their own devices. We would like to see that changed.

The authors, predicting the counter-argument that the name “paper” is a common noun, write:

Every story has a name. Despite the fact the word “paper” is indeed a generic term, and despite the fact the original name of our work was simply The Federalist (Papers was later appended by somebody else), we nonetheless feel because our work was published first, we are entitled to the name Paper. The Federalist Papers have been circulating for more than two centuries, so clearly, we have a right to the name.

The polemic towards Facebook seems to be impelled by Facebook’s specific choice of title and location:

It is especially insulting since Facebook has chosen to launch Paper exclusively in America, where each of its citizens is well aware and well versed in the materials of The Federalist Papers. It is as though they believe citizens will be unaware of the source material from which Facebook Paper is inspired. This nation’s citizens are active participants in the nation’s affairs, and this move by Facebook is offensive to the very concept.

Publius provides a simple solution:

We believe it is the right of every citizen of this nation to have creative freedoms and that’s why we kindly ask Facebook to be creative and not use our name.

When most programmers are introduced to Objective C for the first time, they often recoil in some degree of disgust at its syntax: “Square brackets? Colons in statements? Yuck” (this is a close approximation of my first dances with the language). Objective C is a little different, a little other, so naturally programmers are loath to like it at first glance.

I think that reaction, the claim that Objective C is bad because of its syntax, results from two coinciding factors: 1. the syntax looks very alien; 2. most developers learned it because they wanted to jump in on iOS development, and Apple more or less said “It’s this syntax or the highway, bub,” which put perceptions precariously between a rock and a hard place. Not only did the language taste bad, developers were also forced to eat it regardless.

But any developer with even a modicum of sense will eventually realize the Objective part of Objective C’s syntax is among its greatest assets. Borrowed largely (or shamelessly copied) from Smalltalk’s message sending syntax by Brad Cox (see Object Oriented Programming: An Evolutionary Approach for a detailed look at the design decisions behind Objective C), Objective C’s message sending syntax has some strong benefits over traditional object.method() method calling syntax. It allows for later binding of messages to objects, and perhaps most practically, code reads like a sentence, with parameters prefaced by their purpose in the message.
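The late binding half of that claim can be sketched in any dynamic language. Here’s a loose Python analogy (not Objective C, and `send_message` is an invented name): the “message” is just a name, resolved against the receiver at send time, much as objc_msgSend looks up a method implementation at runtime.

```python
class Alert:
    def show_with_title_message(self, title, message):
        return f"[{title}] {message}"

def send_message(receiver, selector: str, *args):
    """Resolve a 'selector' (little more than a string) on the receiver
    at runtime; the binding of message to method happens here, at send
    time, not at compile time."""
    method = getattr(receiver, selector)
    return method(*args)

# Roughly analogous to: [alert showWithTitle:@"Hi" message:@"Hello"]
print(send_message(Alert(), "show_with_title_message", "Hi", "Hello"))  # [Hi] Hello
```

The analogy is imperfect, but it shows why a message send is more flexible than a statically bound call: the receiver decides what to do with the name only when the message arrives.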

Objective C’s objects are pretty flexible when compared to similar languages like C++ (compare the relative fun of extending and overriding parts of classes in Objective C vs C++), and can be extended at runtime via Categories or through the runtime functions themselves (more on those soon), but Objective C’s objects pale in comparison to those of a Smalltalk-like environment, where objects are always live and browsable. Though objects can be extended at runtime, they seldom are, and are instead almost exclusively fully built by compile time (that is to say, yes, lots of objects are allocated during the runtime of an application, and yes, some methods are added via Categories which are loaded in at runtime, but rarely are whole classes added to the application at runtime).
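A Category-style runtime extension can likewise be mimicked in a dynamic language. A rough Python analogy (the class and method here are invented purely for illustration):

```python
class Greeter:
    """An existing class we don't control and won't recompile."""
    pass

# Loosely analogous to an Objective C Category: attach a new method to an
# existing class at runtime, without subclassing or recompiling.
def shout(self, word):
    return word.upper() + "!"

Greeter.shout = shout  # every existing and future instance gains the method

g = Greeter()
print(g.shout("hello"))  # HELLO!
```

In a Smalltalk-like environment this kind of live extension is the norm rather than the exception, which is exactly the gap being described.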

This compiled nature, along with the runtime functions, points to the real crux of what’s wrong with Objective C: the language still feels tacked on to C. It was in fact originally built as a preprocessor to C (c.f.: Cox), and over the years it has been built up a little sturdier, but always atop C. It’s a superset of C, so all C code is considered Objective C code, which means it inherits every one of C’s warts.

In addition, Objective C has its own proper warts: it lacks method visibility modifiers (like protected, private, partytime, and public), lacks class namespacing (although curiously protocols exist in their own namespace), requires method declarations for public methods, lacks a proper importing system (yes, I’m aware of @import), leans on C’s library linking because it has none of its own, has header files, has a weak abstraction for dynamically sending messages (selectors are not much more than strings), must be compiled and re-run to see changes, etc.

John Siracusa has talked at length about what kinds of problems a problem-ridden language like Objective C can cause in his Copland 2010 essays. In short, Objective C is a liability.

I don’t see Apple dramatically improving or replacing Objective C anytime soon. It doesn’t seem to be in their technical culture, which still largely revolves around C. Apple has routinely added language features (e.g., Blocks, modules) and libraries (e.g., libdispatch) at the C level. Revolving around a language like C makes sense when you consider Apple’s performance-driven culture (“it scrolls like butter because it has to”). iOS, Grand Central Dispatch, Core Animation, and WebKit are all written at the C or C++ level, where runtime costs are near-nonexistent and where higher level concepts like a true object system can’t belong, due to the performance goals the company is ruled by.

Apple is a product-driven company, not a computing-driven company. While there are certainly many employees interested in the future of computing, Apple isn’t the company to drive it. It’s hard to convince such a highly successful product company to operate otherwise.

So if a successor language is to emerge, it’s got to come from elsewhere. I work on programming languages professionally at Hopscotch, but I’m not convinced a better Objective C, some kind of Objective Next is the right way to go. A new old thing is not really what we need. It seems absurd that 30 years after the Mac we still build the same applications the same ways. It seems absurd we still haven’t really caught up to Smalltalk. It seems absurd beautiful graphical applications are created solely and sorely in textual, coded languages. And it seems absurd to rely on one vendor to do something about it.

Many of us create (and all of us use) software, but few if any of us have examined software as a medium. We bumble in the brambles blind to the properties of software, how we change it, and most importantly, how it changes us.

In this talk, I examine the medium of software, how it collectively affects us, and demonstrate what it means for new kinds of software we’re capable of making.

I will be presenting “Understanding Software” at Pivotal Labs NYC on February 25, and if you create or use software, I invite you to come.

Have you ever stepped barefoot on a piece of broken glass and got it stuck in your foot? It was probably quite painful and you most likely had to go to the hospital. So why did you step on it? Why do we do things that hurt us?

The answer, of course, is we couldn’t see we were stepping on a piece of glass! Perhaps somebody had smashed something the night before and thought they’d swept up all the pieces, but here you are with a piece of glass in your foot. But the leftover pieces are so tiny, you can’t even see them. If you could see them, you certainly would not have stepped on them.

Why do we do harmful things to ourselves? Why do we pollute the planet and waste its resources? Why do we fight? Why do we discriminate and hate? Why do we ignore facts and instead trust mystics (i.e., religion)?

The answer, of course, is we can’t see all the things we’re doing wrong. We can’t see how bombs and drones harm others across the world because theirs is a world different from ours. We can’t see how epidemics spread because germs are invisible, and if we’re sick then we’re too occupied to think about anything else. We can’t see how evolution or global climate change could possibly be real because we only see things on a human lifetime scale, not over hundreds or thousands of years.

Humans use inventions to help overcome the limits of our perception. Microscopes and telescopes help us see the immensely small and the immensely large, levers and pulleys help us move the massive. Books help us hear back in time.

Our inventions can help us learn more about time and space, more about ourselves and more about everyone else, if we choose, but so frequently it seems we choose not to do that. We choose to keep stepping on glass, gleefully ignorant of why it happens. “This is how the world is,” we think, “that’s a shame.”

The most flexible and obvious tool we can use to help make new inventions is of course the computer, but it’s not going to solve these problems on its own, and it’s far from the end of the road. We need to resolve to invent better ways of understanding ourselves and each other, better ways of “seeing” with all our senses that which affects the world. We need to take a big step and stop stepping on broken glass.

But I’m not a believer that everyone should podcast, or that podcasting should be as easy as blogging. There’s actually a pretty strong benefit to it requiring a lot of effort: fewer bad shows get made, and the work that goes into a good show is so clear and obvious that the effort is almost always rewarded.

It’s fine to not believe everyone should podcast, but the concept that podcasting should not be easy, that it should be inaccessible and that this is a good thing, is incredibly pompous and arrogant. It’s pompous and arrogant because it implies only those who have enough money to buy a good rig and enough time and effort to waste on editing (and yes, it is a waste if a better tool could do it with less time or effort) should be able to express themselves and be heard through podcasts. It says “If you can’t pay your dues, then you don’t deserve to be listened to.”

It would be like saying “blogging shouldn’t be as easy as typing in a web form, and if fewer people were able to do it, it’d make it better for everyone who likes reading blogs” (Marco, by the way, worked at Tumblr for many years), which is as absurd as it is offensive.

Podcasts, blogging, and the Web might not have been founded on meritocratic ideals, but I think it’s safe to say anyone who pays attention sees them as equalizers, that no matter how big or how small you are, you can have just as much say as anyone else. That it doesn’t always end up that way isn’t the point. The point is, these media bring us closer to an equal playing field than anything before.

Making a good podcast will never be as easy as writing text, and if you’re a podcast listener, that’s probably for the best.

Making a good podcast will never be as easy as writing text, except for the fact that podcasts involve speaking, an innate human ability most of us pick up around age 1, while writing (a non-innate ability) is learned later. We spend many of our waking hours speaking, and few people write at any length.

Now, as someone who handles a lot of text on lots of devices, here’s a stylus-based application I’d love to use: some sort of powerful writing environment in which I could, for example, precisely select parts of a text, highlight them, copy them out of their context and aggregate them in annotations and diagrams which could in turn maintain the link or links to the original source at all times, if needed.

Similarly, it would be wonderful if groups of notes, parts of a text, further thoughts and annotations, could be linked together simply by tracing specific shapes with the stylus, creating live dependences and hierarchies.

This is precisely the sort of thing I hoped to rouse with my essay, and I’m glad to hear the creative gears spinning. What Riccardo proposes sounds like a fantastic use of the stylus, and reminds me about what I’ve read on Doug Engelbart’s NLS system, too.

The stylus is an overlooked and under-appreciated mode of interaction for computing devices like tablets and desktop computers, with many developers completely dismissing it without even a second thought. Because of that, we’re missing out on an entire class of applications that require the precision of a pencil-like input device which neither a mouse nor our fingers can match.

Whenever the stylus as an input device is brought up, the titular quote from Steve Jobs inevitably rears its head. “You have to get ‘em, put ‘em away, you lose ‘em,” he said in the MacWorld 2007 introduction of the original iPhone. But this quote is almost always taken far out of context (and not to mention, one from a famously myopic man — that which he hated, he hated a lot), along with his later additional quote about other devices, “If you see a stylus, they blew it.”

What most people seem to miss, however, is that Steve was talking about a class of devices whose entire interaction required a stylus and couldn’t be operated with just fingers. If every part of the device needed a stylus, then it’d be difficult to use single-handedly, and deadweight were you to misplace the stylus. These devices, like the Palm PDAs of yesteryear, were frustrating to use because of that, but it’s no reason to outlaw the input mechanism altogether.

Thus, Steve’s myopia has spread to many iOS developers. Developers almost unanimously assert the stylus is a bad input device, but again, I believe it’s because those quotes have put in our minds an unfair black-and-white picture: either we use a stylus or we use our fingers.

“So let’s not use a stylus.”

Let’s imagine for a moment or two a computing device quite a lot like one you might already own. It could be a computing device you use with a mouse or trackpad and keyboard (like a Mac) or it could be a device you use with your fingers (like an iPad). Whatever the case, imagine you use such a device on a regular basis, solely with the main input devices provided with the computer, just as you do today. But this computer has one special property: it can magically make any kind of application you can dream of, instantly. This is your Computer of the Imagination.

One day, you find a package addressed to you has arrived on your doorstep. Opening it up, you discover something you recognize, but are generally unfamiliar with. It looks quite a bit like a pencil without a “lead” tip. It’s a stylus. Beside it in the package is a note that simply says “Use it with your computer.”

You grab your Computer of the Imagination and start to think of applications you can use which could only work with your newly arrived stylus. What do they look like? How do they work?

You think of the things you’d normally do with a pencil. Writing is the most obvious one, so you get your Computer of the Imagination to make you an app that lets you write with the stylus. It looks novel at first (“Hey, that’s my handwriting!” right there on the screen), but you soon grow tired of writing with it. “This is much slower and less accurate than using a keyboard,” you think to yourself.

Next, you try making a drawing application. This works much better, you think to yourself, because the stylus provides accuracy you just couldn’t get with your fingers. You may not be the best at drawing straight lines or perfect circles, but thankfully your computer can compensate for that. You hold the stylus in your dominant hand while issuing commands with the other.

Your Computer of the Imagination grows bored and prompts you to think of another application to use with the stylus.

You think. And think. And think…

If you’re drawing a blank, then you’re in good company. I have a hard time thinking of things I can do with a stylus because I’m thinking in terms of what I can do with a pencil. I’ve grown up drawing and writing with pencils, but doing little else. If the computer is digital paper, then I’ve pretty much exhausted what I can do with analog paper. But of course, the computer is so much more than just digital paper. It’s dynamic, it lets us go back and forth in time. It’s infinite in space. It can cover a whole planet’s worth of area and hold a whole library’s worth of information.

But what could this device do if it had a different way to interact with it? I’m not claiming the stylus is new, but to most developers, it’s at least novel. What kind of doors could a stylus open up?

“Nobody wants to use a stylus.”

I thought it’d be a good idea to ask some friends of mine their thoughts on the stylus as an input device, both on how they use one today, and what they think it might be capable of in the future (note these interviews were done in July 2013, I’m just slow at getting things published).

Question: How do you find support in apps for the various styluses you’ve tried?

Joe Cieplinski: I’ve mainly used it in Paper, iDraw, and Procreate, all of which seem to have excellent support for it. At least as good as they can, given that the iPad doesn’t have touch sensitivity. In other apps that aren’t necessarily for art I haven’t tried to use the stylus as much, so can’t say for sure. Never really occurred to me to tap around buttons and such with my stylus as opposed to my finger.

Ryan Nystrom: I use a Cosmonaut stylus with my iPad for drawing in Paper. The Cosmonaut is the only stylus I use, and Paper is the only app I use it in (also the only drawing app I use). I do a lot of prototyping and sketching in it on the go. I have somewhat of an art background (used to draw and paint a lot) so I really like having a stylus over using my fingers.

Dan Leatherman: Support is pretty great for the Cosmonaut, and it’s made to be pretty accurate. I find that different tools (markers, paintbrushes, etc.) adapt pretty well.

Dan Weeks: For non-pressure sensitive stylus support it’s any app and I’ve been known to just use the stylus because I have it in my hand. Not for typing but I’ve played games and other apps besides drawing with a stylus. Those all seem to work really well because of the uniformity of the nib compared to a finger.

Question: Do you feel like there is untapped (pardon the pun) potential for a stylus as an input device on iOS? It seems like most people dismiss the stylus, but it seems to me like a tool more precise than a finger could allow for things a finger just isn’t capable of. Are there new doors you see a stylus opening up?

Joe Cieplinski: I was a Palm user for a very long time. I had various Handspring Visors and the first Treo phones as well. I remember using the stylus quite a bit in all that time. I never lost a stylus, but I did find having to use two hands instead of one for the main user interface cumbersome.

The advantage of using a stylus with a Palm device was that the stylus was always easy to tuck back into the device. One of the downsides to using a stylus with an iPad is that there’s no easy place to store it. Especially for a fat marker style stylus like the Cosmonaut.

While it’s easy to dismiss the stylus, thanks to Steve Jobs’ famous “If you see a stylus, they blew it” quote, I think there are probably certain applications that could benefit more from using a more precise pointing device. I wouldn’t ever want a stylus to be required to use the general OS, but for a particular app that had good reason for small, precise controls, it would be an interesting opportunity. Business-wise, there’s also potential there to partner up between hardware and software manufacturers to cross promote. Or to get into the hardware market yourself. I know Paper is looking into making their own hardware, and Adobe has shown off a couple of interesting devices recently.

Ryan Nystrom: I do, and not just with styli (is that a word?). I think Adobe is on to something here with the Mighty.

I think there are two big things holding the iPad back for professionals: touch sensitivity (i.e., how hard you’re pressing) and screen thickness.

The screen is honestly too thick to draw accurately on. If you use a Jot or any other fine-tip stylus you’ll see what I mean: the point of contact won’t precisely correlate with the pixel that gets drawn unless your viewing angle is perpendicular to the iPad screen. That thickness warps your view and makes drawing difficult once you’ve lifted the stylus from the screen and want to tap/draw on a particular point (try connecting two 1px dots with a thin line using a Jot).
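The geometry Ryan describes is easy to quantify. Here’s a minimal sketch of my own (not from the discussion; the function name and values are illustrative, and refraction through the glass is ignored):

```python
import math

def parallax_offset_mm(glass_mm: float, view_angle_deg: float) -> float:
    """Apparent gap between the stylus tip and the pixel it targets.

    The pixels sit beneath a layer of glass; viewed at anything other
    than perpendicular, that thickness shifts where the tip appears to
    touch. Simplified geometry: refraction is ignored.
    """
    tilt = math.radians(90.0 - view_angle_deg)  # tilt away from perpendicular
    return glass_mm * math.tan(tilt)

# Viewed head-on (90 degrees) there is no offset at all.
print(parallax_offset_mm(1.0, 90.0))
# At a 60-degree viewing angle, 1 mm of glass shifts the apparent
# contact point by over half a millimetre.
print(round(parallax_offset_mm(1.0, 60.0), 2))
```

Half a millimetre may sound small, but it spans several pixels on a Retina display, which is exactly the two-dots problem described above.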

There also needs to be some type of pressure sensitivity. If you’re ever drawing or writing with a tool that blends (pencil, marker, paint), quick+light strokes should appear lighter than slow, heavy strokes. Right now this is just impossible.
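One workaround some drawing apps use is to approximate stroke weight from speed, since velocity can be derived from touch samples even without pressure hardware. A hedged sketch of that idea (the function names, the 0.2 to 1.0 opacity range, and the max-speed constant are all hypothetical tuning choices, not any shipping app’s code):

```python
import math

def sample_speed(p0, p1, dt):
    """Speed (points/sec) between two consecutive touch samples."""
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / dt

def stroke_alpha(speed, max_speed=2000.0):
    """Map stroke speed to ink opacity: quick, light strokes render
    lighter; slow, deliberate strokes render darker.
    Clamps speed, then interpolates opacity from 1.0 down to 0.2."""
    speed = min(max(speed, 0.0), max_speed)
    return 1.0 - 0.8 * (speed / max_speed)
```

It’s a crude stand-in: true pressure sensitivity measures force directly, while this only correlates with it, which is part of why the hardware gap matters.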

Oleg Makaed: I believe we will see more support from developers as styluses and related technology for the iPad emerge (to me, the stylus is an early stage in the life of input devices for tablets). For now, developers can focus on solving existing problems: the fluency of stylus detection, palm interference with the touch surface, and such.

Tools like the Project Mighty stylus and Napoleon ruler by Adobe can be very helpful for creative minds. Nevertheless, touch screens were invented to make the experience more natural and intuitive, and a stylus as a mass product doesn’t seem right. The next stage might bring us wearable devices that extend our limbs and act in a consistent way. The finger-screen relationship isn’t perfect yet, and there is still room for new possibilities.

Dan Leatherman: I think there’s definite potential here. Having formal training in art, I’m used to using analog tools, and no app (that I’ve seen) can necessarily emulate that as well as I’d like. The analog marks made have inconsistencies, but the digital marks just seem too perfect. I love the idea of a paintbrush stylus (but I can’t remember where I saw it).

Dan Weeks: I think children learn with their fingers, but finger extensions (which any writing implement is) become very accurate tools for most people. That may just be a product of how much education focused on writing, but I think it’s a natural extension: with something whose 3D position you can fine-tune using multiple muscles, you’ll get good results.

I see a huge area for children and information density. With a finger in a child-focused app larger touch targets are always needed to account for clumsiness in pointing (at least so I’ve found). I imagine school children would find it easier to go with a stylus when they’re focused, maybe running math drills or something, but for sure in gesturing without blocking their view of the screen as much with hands and arms. A bit more density on screen resulting from stylus based touch targets would keep things from being too simple and slowing down learning.

Jason: What about the stylus as something for enhancing accessibility?

Doug Russell: I haven’t really explored styluses as an accessibility tool. I could see them being useful for people with physical/motor disabilities. Something like those big ol’ Cosmonaut styluses would probably be well suited for people with gripping-strength issues.

Dan Weeks: I’ve also met one older gentleman that has arthritis such that he can stand to hold a pen or stylus but holding his finger out to point with it grows painful over time. He loves his iPad and even uses the stylus to type with.

It seems the potential for the stylus is out there. It’s a precise tool, it enhances writing and drawing skills most of us already have, and it makes for more precise and accessible software than we can get with 44pt fingertips.

Creating a software application requiring a stylus is almost unheard of in the iOS world, where most apps are unfortunately poised for the lowest common denominator of the mass market. Instead, I see the stylus as an opportunity for a new breed of specialized, powerful applications. As it stands today, I see the stylus as almost entirely overlooked.

In yesterday’s Apple Keynote, Phil Schiller used almost the exact same phrase while talking about the new Retina MacBook Pros (26:40):

For all the things you love to do: Reading your mail, surfing the Web, doing productivity, and even watching movies that you’ve downloaded from iTunes.

And about the iPad (65:15):

The ability to hold the internet in your hands, as you surf the web, do email, and make FaceTime calls.

It gave me pause to think, “If my computers can already do this, why then should I be interested in these new ones?” Surf the web, read email? My computers do this just fine.

Although Macs, iOS devices, and computers in general are capable of many forms of software, people seem resigned to the idea that “surf the web, check email, etc.” is what computers are for, and I think they’re resigned to it because that’s the message companies keep pushing our way.

The way Apple talks about it, it almost seems like it’s your duty, some kind of chore, “Well, I need a computer because I need to do those emails, and surf those websites,” instead of an enabling technology to help you think in clearer or more powerful ways. “You’re supposed to do these menial tasks,” they’re telling me, “and you’re supposed to do it on this computer.”

This would be like seeing a car commercial where the narrator said “With this year’s better fuel economy, you can go to all the places you love, like your office and your favourite restaurants.” I may be being a little pedestrian here, but it seems to me like car commercials are often focussing on new places the car will take you to. “You’re supposed to adventure,” they’re telling me, “and you’re supposed to do it in this car.”

What worries me isn’t Apple’s marketing. Apple is trying to sell computers and it does a very good job at it, with handsome returns. What worries me is people believing “computers are for surfing the web, checking email, writing Word documents” and nothing else. What worries me is computers becoming solely commodities, with software following suit.

How do you do something meaningful with software when the world is content to treat it as they would a jug of milk?

But I’m not saying you should ignore flow! No: this is no time to hole up and work in isolation, emerging after long months or years with your perfectly-polished opus. Everybody will go: huh? Who are you? And even if they don’t—even if your exquisitely-carved marble statue of Boba Fett is the talk of the tumblrs for two whole days—if you don’t have flow to plug your new fans into, you’re suffering a huge (here it is!) opportunity cost. You’ll have to find them all again next time you emerge from your cave.

When I first saw some of the approaches, which I’ll outline below, I was uncomfortable. Things didn’t feel natural. The abstractions that I was so used to working in were useless to me in the new world.

Smart people like these don’t often propose new solutions to solved problems just because it’s fun. They propose new solutions when they have better solutions. Let’s take a look.

Hold on to your butts, here comes a good ol’ fashioned cross-examination.

On Declarative Programming,

Our first example is declarative programming. I’ve noticed that some experienced developers tend to shy away from mutating instance state and instead rely on immutable objects.

Declarative programming abstracts away how computers achieve goals and focuses instead on what needs to be done. It’s the difference between using a for loop and an enumerate function.
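That loop-versus-description contrast might look like this (my sketch, in Python for brevity rather than Objective-C):

```python
numbers = [3, 1, 4, 1, 5]

# Imperative: spell out *how*. Manage an index, mutate an accumulator.
squares = []
for i in range(len(numbers)):
    squares.append(numbers[i] ** 2)

# Declarative: state *what*. Describe the result; no counter, no mutation.
declarative_squares = [n ** 2 for n in numbers]

assert squares == declarative_squares == [9, 1, 16, 1, 25]
```

The second form has no intermediate state for a reader (or a bug) to get tangled in, which is the property the rest of this argument turns on.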

On the surface, I agree with this, but for different reasons. The primary reason this is important, and why functional programming languages have similar or better advantages here, is that they eliminate state. How you do that, whether by declarative or functional means, is in some ways irrelevant. The problem is that our current programming languages do a terrible job of representing logic and instead leakily abstract the computer (hey, objects are blocks of memory that seem an awful lot like partitions of RAM…), so the state of an application becomes a hazard.

But there are also times when eliminating state isn’t an option, and in those cases declarative languages fall short, too. State is sometimes requisite when dealing with systems, and in that case state should be shown. It’s a failure of the development environment to have hidden state. As Bret Victor says in Learnable Programming, programming environments must either show state or eliminate it. A language or environment which does neither is not going to make programming significantly better, and will therefore remain in obscurity.

Objective-C is an abomination (I love it anyway).

I agree. It’s an outdated aberration. We need something that’s much better. Not just a sugar-coating like Go was to C++ (this was completely intentional, mind you, but if we’re going to get a new programming language, it damn well better be leaps and bounds ahead of what we’ve got now).

It’s a message-oriented language masquerading as an object-oriented language built on top of C, an imperative language.

Actually, the concepts were originally supposed to be inseparable. Alan Kay, who coined the term “Object Oriented Programming”, used it to describe a system composed of “smaller computers” whose main strength was its components communicating through messages. Classes and objects just sort of arose from those. Messages are a tremendously misunderstood concept among Object Oriented programmers. I’d highly suggest everyone do their reading.

It was hard to get into declarative programming because it stripped away everything I was used to and left me with tools I was unfamiliar with. My coding efficiency plummeted in the short term. It seemed like prematurely optimizing my code. It was uncomfortable.

I don’t think it makes me uncomfortable because it’s unfamiliar, but because things like Reactive Cocoa, grafted on to Objective C as they are, create completely bastardized codebases. They fight the tools and conventions every Cocoa developer knows, and naturally have a hard time existing in an otherwise stateful environment.

It’s inherent in what Reactive Cocoa is trying to accomplish, and would be inherent in anyone trying to graft on a new programming concept to the world of Cocoa. What we need is not a framework poised atop a rocky foundation, but a new foundation altogether. Reactive Cocoa tries to solve the problem of unmanageable code in entirely the wrong way. It’s the equivalent of saying “there are too many classes in this file, we should create a better search tool!” (relatedly, I think working with textual source code files in the first place severely constrains software development. But more on that in some future work I’ll publish soon).

Dot-syntax is a message-passing syntax in Objective-C that turns obj.property into [obj property] under the hood. The compiler doesn’t care either way, but people get in flame wars over the difference.

Dot-syntax isn’t involved with message passing, just involved with calling messages. It’s a subtle but important difference.

In the middle of the spectrum, which I like, is the use of dot-syntax for idempotent values. That means things like UIApplication.sharedApplication or array.count but not array.removeLastObject.

The logic is noble but I think still flawed. I think methods should be treated like methods and properties like properties: semantically they represent two different aspects of objects, dot-syntax was designed specifically for properties, Apple developers advise against mixing them, and it just makes methods harder to search for. Not only that, but dots suggest early binding of methods to objects, which goes against the late-binding principles of the original Objective C design.

It’s also hard because Xcode’s autocomplete will not help you with methods like UIColor.clearColor when using the dot-syntax. Boo.

This is almost always a sign!

[Autolayout] promised to be a magical land where programming layouts was easy and springs and struts (auto resizing masks) were a thing of the past. The reality was that it introduced problems of its own, and Autolayout’s interface was non-intuitive for many developers.

This is another place where there’s an obvious flaw in the way things work in our development environment. Constraint-based systems are notoriously difficult because they normally require all variables to be solved simultaneously, leaving no room for flexibility. This flies in the face of what a computer program writer is used to, and thus is hard to rectify. When combined with an interface which invisibly presents (i.e., does not present) these constraints, developers are left with nothing short of a clusterfuck.

I’ve wanted to believe, and I’ve abandoned Autolayout every year since its introduction because of this. While it does improve year over year, I believe there are fundamental problems it won’t be able to overcome while sticking with the same paradigms we’ve got today.

Springs and struts are familiar, while Autolayout is new and uncomfortable. Doing simple things like vertically centring a view within its superview is difficult in Autolayout because it abstracts so much away.

It’s not that Autolayout is new and unfamiliar; it’s that it adds more, but hidden, elements to a layout. It doesn’t abstract too much away, but it does make it impossible to deal with what is presented.
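For what it’s worth, the vertical-centring case from the quote reduces to a single equation once you write the constraint out; Autolayout’s difficulty is that every such equation in a layout must hold simultaneously, and the environment hides them. A minimal sketch of the arithmetic (illustrative only, my own function, not Autolayout’s API):

```python
def center_vertically(superview_height: float, view_height: float) -> float:
    """Solve the constraint  view.midY == superview.midY  for view.y:

        view_y + view_height / 2 == superview_height / 2

    One equation, one unknown: trivial in isolation. The trouble starts
    when dozens of these must be satisfied at once and any missing or
    conflicting constraint makes the whole system unsolvable.
    """
    return superview_height / 2.0 - view_height / 2.0

# A 20pt-tall view centred in a 100pt superview sits at y = 40.
print(center_vertically(100.0, 20.0))
```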

And finally,

Change is hard. Maybe the iOS community will resist declarative programming, as has the web development community for two decades.

Except for the Hypertext Markup Language and Cascading Style Sheets (languages which generate these aren’t solely web development languages any more than Objective C is), of course. HTML is perhaps the most successful declarative system we’ve ever invented.

But actually finally,

We’re in a golden age of tools

I hope at this point it’s clear I disagree with this. Some may say at least today we’ve got the best tools we’ve ever had, but to them I’d suggest looking at any of the tools developed by Xerox PARC in the 70s, 80s, or 90s. And as far as today’s or yesterday’s tools go, I think we’re far from a land of opulence. But there is hope. Today we have at our disposal exceptionally fast, interconnected machines, far outpacing anything previously available. We have networks dedicated to educating about all kinds of toolmaking, from programming to information graphics design, to language design.

We’re in a golden age of opportunity, we just need to take a chance on it.

You see, there’s this thing called FOMO: the Fear of Missing Out (which I’ve previously talked about on this site). It’s the feeling you get when you see your peers’ activity online, particularly on social networks, and it leaves you empty because you see all the things you’re not doing.

I like to think I am, to a degree at least, somewhat immune to the FOMO. I’m not totally unaffected by it, but I feel like I’m antisocial enough so that at least it doesn’t bother me too much to see others’ activities online.

What does bother me, more and more, is the fear that I’m wasting my life looking at pictures online. When I get to the heart of things, so much of my non-working life online is spent looking at pictures. There’s Instagram and Flickr and Tumblrs. There are the stuttery sites like FFFFound (design porn) and Dribbble (design masturbation). Then there’s Twitter, which has its own share of photos or links to click (most of the links have lots of photos). There’s MacRumors and The Verge, and there’s my RSS reader, too. Although some of those sources have “news”, there’s almost always too much for me to read in a day, so I skip most of it. Back to the pictures.

These are my sites. When I’ve finished with them, I’ll start channel-changing back with the first ones all over again.

I’m not saying these websites are all bad or even any bad. I’m not saying there aren’t good aspects to them. I’m not saying everyone who uses them is wasting their lives.

I am saying, however, this is what I see myself doing. I have no fear of missing out because so often, WMLOLAP seems to be exactly what I want to do. And that’s why it gives me the Fear, because in reality, I so super very much do not want to do that.

[…] So for instance, in our environment, we would never have thought of having a separate browser and editor. Just everyone would have laughed, because whenever you’re working on trying to edit and develop concepts you want to be moving around very flexibly. So the very first thing is get those integrated.

Then [in NLS] we had it that every object in the document was intrinsically addressable, right from the word go. It didn’t matter what date a document’s development was, you could give somebody a link right into anything, so you could actually have things that point right to a character or a word or something. All that addressability in the links could also be used to pick the objects you’re going to operate on when you’re editing. So that just flowed. With the multiple windows we had from 1970, you could start editing or copying between files that weren’t even on your windows.

Also we believed in multiple classes of user interface. You need to think about how big a set of functional capabilities you want to provide for a given user. And then what kind of interface do you want the user to see? Well, since the Macintosh everyone has been so conditioned to that kind of WIMP environment, and I rejected that, way back in the late 60s. Menus and things take so long to go execute, and besides our vocabulary grew and grew.

And the command-recognition [in the Augment system]. As soon as you type a few characters it recognises, it only takes one or two characters for each term in there and it knows that’s what’s happening.

I was moved by this bit from John Markoff’s “What the Dormouse Said”, a tale of 1960s counterculture and how it helped create the personal computer:

Getting engaged precipitated a deep crisis for Doug Engelbart. The day he proposed, he was driving to work, feeling excited, when it suddenly struck him that he really had no idea what he was going to do with the rest of his life. He stopped the car and pulled over and thought for a while.

He was dumbstruck to realize that there was nothing that he was working on that was even vaguely exciting. He liked his colleagues, and Ames was in general a good place to work, but nothing there captured his spirit.

It was December 1950, and he was twenty-five years old. By the time he arrived at work, he realized that he was on the verge of accomplishing everything that he had set out to accomplish in his life, and it embarrassed him. “My God, this is ridiculous, no goals,” he said to himself.

That night, when he went home, he began thinking systematically about finding an idea that would enable him to make a significant contribution in the world. He considered general approaches, from medicine to studying sociology or economics, but nothing resonated. Then, within an hour, he was struck in a series of connected flashes of insight by a vision of how people could cope with the challenges of complexity and urgency that faced all human endeavors. He decided that if he could create something to improve the human capability to deal with those challenges, he would have accomplished something fundamental.

In a single stroke, Engelbart experienced a complete vision of the information age. He saw himself sitting in front of a large computer screen full of different symbols. (Later, it occurred to him that the idea of the screen probably came into his mind as a result of his experience with the radar consoles he had worked on in the navy.) He would create a workstation for organizing all of the information and communications needed for any given project. In his mind, he saw streams of characters moving on the display. Although nothing of the sort existed, it seemed the engineering should be easy to do and that the machine could be harnessed with levers, knobs, or switches. It was nothing less than Vannevar Bush’s Memex, translated into the world of electronic computing.

This bit resonated with me for several reasons, one of which will become clear in the coming weeks. But the really important thing isn’t just that Engelbart recognized a dissatisfaction with his life and how to fix it. It’s not that he had a stroke of vision to invent so much of what modern personal computers would (mostly incorrectly) be based on. What’s really important is that he then went on to see his vision through.

Remember, this epiphany happened to him in 1950, and his groundbreaking “Mother of All Demos” presentation wasn’t until 1968. It might seem like something so grand had to come all at once (especially considering how long ago it was), but it took nearly two decades to be realized.

Soon they were sending tweets, socializing on Facebook and streaming music through Pandora, they said.

L.A. Unified School District Police Chief Steven Zipperman suggested, in a confidential memo to senior staff obtained by The Times, that the district might want to delay distribution of the devices.

“I’m guessing this is just a sample of what will likely occur on other campuses once this hits Twitter, YouTube or other social media sites explaining to our students how to breach or compromise the security of these devices,” Zipperman wrote. “I want to prevent a ‘runaway train’ scenario when we may have the ability to put a hold on the roll-out.”

How dare kids enjoy technology. They’re supposed to be learning, not enjoying!

Many users assume — or have been assured by Internet companies — that their data is safe from prying eyes, including those of the government, and the N.S.A. wants to keep it that way. The agency treats its recent successes in deciphering protected information as among its most closely guarded secrets, restricted to those cleared for a highly classified program code-named Bullrun, according to the documents, provided by Edward J. Snowden, the former N.S.A. contractor.

Beginning in 2000, as encryption tools were gradually blanketing the Web, the N.S.A. invested billions of dollars in a clandestine campaign to preserve its ability to eavesdrop. Having lost a public battle in the 1990s to insert its own “back door” in all encryption, it set out to accomplish the same goal by stealth.

But Bezos suggested that the current model for newspapers in the Internet era is deeply flawed: “The Post is famous for its investigative journalism,” he said. “It pours energy and investment and sweat and dollars into uncovering important stories. And then a bunch of Web sites summarize that [work] in about four minutes and readers can access that news for free. One question is, how do you make a living in that kind of environment? If you can’t, it’s difficult to put the right resources behind it. . . . Even behind a paywall [digital subscription], Web sites can summarize your work and make it available for free. From a reader point of view, the reader has to ask, ‘Why should I pay you for all that journalistic effort when I can get it for free’ from another site?”

Why indeed.

Whatever the mission, he said, The Post will have “readers at its centerpiece. I’m skeptical of any mission that has advertisers at its centerpiece. Whatever the mission is, it has news at its heart.”

There you have it. All the major newspaper companies are shrinking, but now the Washington Post has outside investment, allowing it to experiment with new models and discover its future.

If the Web is eating your business from the low end, and your competitor has newfound deep pockets, where does that leave your business?

Fifty years ago, an autoworker could provide a middle-class existence for his family. Bought a house. Put kids through college. Wife stayed home. He didn’t even need a degree.

That shit’s over. Detroit just went bankrupt.

No one’s got it better than developers right now. When the most frequent complaint you hear is “I wish recruiters would stop spamming me with six-figure job offers,” life’s gotten pretty good.[…]

No profession stays on top forever… just ask your recently graduated lawyer friends.

Although the autoworkers analogy works, I think there’s a better one for current software developers: We’re like those who were capable of writing long before that ability was shared with the masses.

We have forms and means to express ourselves which are superior (in their own ways) to static writings. For instance, I can write an essay and you can read exactly the thoughts I decided you should read. But I can write a piece of software to also express those points — and other arguments as well — and you the “reader” get to explore my thoughts and in a sense, ask my “thoughts” questions. This is a superior trait over the plain written word.

Since we software developers can express thoughts in ways people who can “only” read and write cannot, we are quite like the privileged folk of centuries past, who could express thoughts in written word which exceeded what could be spoken.

The question is, should we milk it for what it’s worth or should we embrace it as a moral responsibility to give everybody this form of expression?

Garbage is generally overlooked because we create so much of it so casually and so constantly that it’s a little bit like paying attention to, I don’t know, to your spit, or something else you just don’t think about. You—we—get to take it for granted that, yeah, we’re going to create it, and, yeah, somebody’s going to take care of it, take it away. It’s also very intimate. There’s very little we do in twenty-four hours except sleeping, and not always even sleeping, when we don’t create some form of trash. Even just now, waiting for you, I pulled out a Kleenex and I blew my nose and I threw it out, in not even fifteen seconds. There’s a little intimate gesture that I don’t think about, you don’t think about, and yet there’s a remnant, there’s a piece of debris, there’s a trace.[…]

Well, it’s cognitive in that exact way: that it is quite highly visible, and constant, and invisibilized. So from the perspective of an anthropologist, or a psychologist, or someone trying to understand humanness: What is that thing? What is that mental process where we invisibilize something that’s present all the time?

The other cognitive problem is: Why have we developed, or, rather, why have we found ourselves implicated in a system that not only generates so much trash, but relies upon the accelerating production of waste for its own perpetuation? Why is that OK?

And a third cognitive problem is: Every single thing you see is future trash. Everything. So we are surrounded by ephemera, but we can’t acknowledge that, because it’s kind of scary, because I think ultimately it points to our own temporariness, to thoughts that we’re all going to die.[…]

It’s an avoidance of addressing mortality, ephemerality, the deeper cost of the way we live. We generate as much trash as we do in part because we move at a speed that requires it. I don’t have time to take care of the stuff that surrounds me every day that is disposable, like coffee cups and diapers and tea bags and things that if I slowed down and paid attention to and shepherded, husbanded, nurtured, would last a lot longer. I wouldn’t have to replace them as often as I do. But who has time for that? We keep it cognitively and physically on the edges as much as we possibly can, and when we look at it head-on, it betrays the illusion that everything is clean and fine and humming along without any kind of hidden cost. And that’s just not true.

And:

That sort of embarrassment is directed at people on the job every day on the street, driving the truck and picking up the trash.

People assume they have low IQs; people assume they’re fake mafiosi, wannabe gangsters; people assume they’re disrespectable. Unlike, say, a cop or a firefighter. And I do believe very strongly it’s the most important uniformed force on the street, because New York City couldn’t be what we are if sanitation wasn’t out there every day doing the job pretty well.

And the health problems that sanitation’s solved by being out there are very, very real, and we get to forget about them. We don’t live with dysentery and yellow fever and scarlet fever and smallpox and cholera, those horrific diseases that came through in waves. People were out of their minds with terror when these things came through. And one of the ways that the problem was solved—there were several—but one of the most important was to clean the streets. Instances of communicable and preventable diseases dropped precipitously once the streets were cleaned. Childhood diseases that didn’t need to kill children, but did. New York had the highest infant mortality rates in the world for a long time in the middle of the nineteenth century. Those rates dropped. Life expectancy rose. When we cleaned the streets! It seems so simple, but it was never well done until the 1890s, when there was this very dramatic transformation.

The Great Pacific Garbage Patch, also described as the Pacific Trash Vortex, is a gyre of marine debris in the central North Pacific Ocean, located roughly between 135°W and 155°W and between 35°N and 42°N. The patch extends over an indeterminate area, with estimates ranging very widely depending on the degree of plastic concentration used to define the affected area.

The patch is characterized by exceptionally high concentrations of pelagic plastics, chemical sludge and other debris that have been trapped by the currents of the North Pacific Gyre. Despite its size and density, the patch is not visible from satellite photography, since it consists primarily of suspended particulates in the upper water column. Since plastics break down to even smaller polymers, concentrations of submerged particles are not visible from space, nor do they appear as a continuous debris field. Instead, the patch is defined as an area in which the mass of plastic debris in the upper water column is significantly higher than average.

It is not clear to me that the post WW2 model of national research, largely done in National Labs, Universities, and a tiny amount in industry is the future. In fact before WW2 a large portion of research was done in independent and industrial research labs. I know from the experience of our lab, that we are faster, cheaper, and at least as rigorous, and probably more creative, than good federal or academic research centers.

The first web browser - or browser-editor rather - was called WorldWideWeb as, after all, when it was written in 1990 it was the only way to see the web. Much later it was renamed Nexus in order to save confusion between the program and the abstract information space (which is now spelled World Wide Web with spaces).

I wrote the program using a NeXT computer. This had the advantage that there were some great tools available; it was a great computing environment in general. In fact, I could do in a couple of months what would take more like a year on other platforms, because on the NeXT, a lot of it was done for me already.

The Web was originally built to not only be browsed graphically, but also edited graphically. HTML was not intended to be edited directly.

Goldie Blox: A Building Toy Tailored for Girls. GoldieBlox, Inc. is a toy company founded in 2012 by Debbie Sterling, a female engineer from Stanford University. Engineers are solving some of the biggest challenges our society faces. They are critical to the world economy, earn higher salaries and have greater job security. And they are 89% male. We believe engineers can’t responsibly build our world’s future without the female perspective.

GoldieBlox offers a much-needed female engineer role model who is smart, curious and accessible. She has the potential to get girls interested in engineering, develop their spatial skills and build self-confidence in their problem solving abilities. This means that GoldieBlox will nurture a generation of girls who are more confident, courageous and tech-savvy, giving them a real opportunity to contribute to the progress made by engineers in our society.

You don’t miss what you’ve never had. People talk about sex when you’re 12 years old and you don’t know what they’re talking about - I don’t know what people are talking about when they talk about driving. I grew up with roller skates, a bicycle, using the trolley and bus lines until they went out of existence. No, you don’t miss things. Put me in a room with a pad and a pencil and set me up against a hundred people with a hundred computers - I’ll outcreate every goddamn sonofabitch in the room.

“The most dangerous thought you can have as a creative person is to think you know what you’re doing.”

It’s possible to misinterpret what I’m saying here. When I talk about not knowing what you’re doing, I’m arguing against “expertise”, a feeling of mastery that traps you in a particular way of thinking.

But I want to be clear – I am not advocating ignorance. Instead, I’m suggesting a kind of informed skepticism, a kind of humility.

Ignorance is remaining willfully unaware of the existing base of knowledge in a field, proudly jumping in and stumbling around. This approach is fashionable in certain hacker/maker circles today, and it’s poison.

Knowledge is essential. Past ideas are essential. Knowledge and ideas that have coalesced into theory is one of the most beautiful creations of the human race. Without Maxwell’s equations, you can spend a lifetime fiddling with radio waves and never invent radar. Without dynamic programming, you can code for days and not even build a sudoku solver.

It’s good to learn how to do something. It’s better to learn many ways of doing something. But it’s best to learn all these ways as suggestions or hints. Not truth.

Learn tools, and use tools, but don’t accept tools. Always distrust them; always be alert for alternative ways of thinking. This is what I mean by avoiding the conviction that you “know what you’re doing”.

In 1999, Professor Baba Shiv (currently at Stanford) and his co-author Alex Fedorikhin did a simple experiment on 165 grad students. They asked half to memorize a seven-digit number and the other half to memorize a two-digit number. After completing the memorization task, participants were told the experiment was over, and then offered a snack choice of either chocolate cake or a fruit bowl.

The participants who memorized the seven-digit number were nearly 50% more likely than the other group to choose cake over fruit.

Researchers were astonished by a pile of experiments that led to one bizarre conclusion:

Willpower and cognitive processing draw from the same pool of resources.

And:

My father died unexpectedly last week, and as happens when one close to us dies, I had the “on their deathbed, nobody thinks…” moment. Over the past 20 years of my work, I’ve created interactive marketing games, gamified sites (before it was called that), and dozens of other projects carefully, artfully, scientifically designed to slurp (gulp) cognitive resources for… very little that was “worth it”. Did people willingly choose to engage with them? Of course. And by “of course” I mean, not really, no. Not according to psychology, neuroscience, and behavioral economics research of the past 50 years. They were nudged/seduced/tricked. And I was pretty good at it. I am so very, very sorry.

My goal for Serious Pony is to help all of us take better care of our users. Not just while they are interacting with our app, site, product, but after. Not just because they are our users, but because they are people.

Because on their deathbed, our users won’t be thinking,“If only I’d spent more time engaging with brands.”

I’m currently working on an Auto-tweet feature, and you can follow @ospeedoflight for updates there. It might be a little bumpy over the weekend as I iron out the bugs, but it should be a good way to stay up to date with all the things I publish here. Some older articles are also missing, as is a link to my archives. Those should be resolved this weekend, too.

When I first heard of Frank Krueger’s new app Calca, a fantastic re-imagining of a calculator, mathematical environment, and a text editor, I was hooked. On the surface, Calca resembles Soulver, another real-time “calculator” program for your computer. But when you take a deeper look at Calca, you realize there’s much beneath the surface.

In addition to being a perfect example of how math should be done on a computer (as opposed to making a window with buttons labelled 0-9), Calca gives you instant feedback and instant evaluation. It evaluates as much as it can, given the information it has, but it’s thoughtful enough to be OK when there are unknowns, patiently displaying the variables in-place.
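This evaluate-what-you-can behaviour is easy to appreciate with a toy model. Here is a minimal Python sketch using sympy - purely illustrative, and not Calca's actual engine or syntax:

```python
# Illustrative only: mimicking Calca's "evaluate as much as possible,
# leave unknowns in place" behaviour with sympy (not Calca's engine).
from sympy import symbols, simplify

price, tax = symbols("price tax")

total = price * (1 + tax)          # both unknown: stays fully symbolic
partial = total.subs(tax, 0.15)    # tax now known: partially evaluated
final = partial.subs(price, 20.0)  # everything known: a plain number

print(simplify(total))    # still contains price and tax
print(simplify(partial))  # 1.15*price - the unknown stays in place
print(float(final))       # 23.0
```

The point is the middle step: the expression is reduced as far as the known values allow, while the remaining variable is simply displayed in place rather than treated as an error.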

More than just a calculator, Calca allows for Markdown formatted text, with mathematics and their evaluations appearing precisely where they are designated.

I just had to know more about this app and the work that went into it, so I’m delighted to present my interview with its developer Frank Krueger. I hope you enjoy it as much as I did.

Interview

Jason: As you explain on your website, Calca grew out of a frustration. How did you come to the decision to solve that with Calca? Had you explored some other ideas first, or did Calca come about all at once?

Frank: In one important way, I have been thinking about Calca for a long time. I found that whenever I needed to do some algebra, perhaps convert from a screen coordinate to a 3D coordinate in one of my applications, I would do my “heavy thinking” in a plain text file. I would write out examples and then go line by line manipulating those examples, and any other equations I could think of along the way, until I had something that I could convert into code.

So I’ve always had these text file “derivations”. But it’s a terrible system: Sometimes I actually needed to do some arithmetic that I couldn’t do in my head. So I would switch to Mac’s Calculator app, or query Wolfram Alpha or use Soulver, and then switch back to my text file and plug the results in. Also, because Copy & Paste were my main tools for doing algebra, I was always suspicious of my own work.

So Calca, you could say, came about all at once as the realization that I should just write a smarter text editor to handle these files I was creating - one that knew arithmetic and algebra, had features from the programming IDEs I use every day, but still tried its best to stay out of your way (programming IDEs are often too rigid to explore ideas.)

The idea that Calca should update as you type was an assumption from the start. I certainly wasn’t going to hit Cmd+R whenever I typed something new!

Jason: What’s your background?

Frank: I have been writing software professionally for 15 years now. I was lucky enough to intern at General Motors in an electronics R&D group writing embedded systems software while I was young. That job taught me a lot about engineering in general, but also about computers. I was writing embedded code in assembler and C and diagnostic tools in C++ before college. On the side, I was also active in groups writing level and content editors for video games.

In college I earned a Master’s degree in Electrical Engineering specializing in control systems (at RIT in NY). I mention this because it was the time when I became most comfortable with mathematics. Modern engineering degrees are sometimes hard to differentiate from applied math degrees!

From there, I moved to Seattle to work at Microsoft. That was a wonderful experience learning how big companies write software, but wasn’t fulfilling enough. So I packed up, went to India and started a company with an old friend building control systems for naval ships. That is the same company I operate today, Krueger Systems. But we ended up “pivoting” (in today’s parlance) to web development consulting and I spent some time doing that.

The introduction of the iPhone freed me from the terrible world of web development (this was before JavaScript’s ascendance, web dev is better now). Starting in the fall of 2008, I wrote a huge assortment of apps, some of which even shipped. I was basically in love with the iPhone and was just doing my best to create interesting software for it. I didn’t make much money from these apps, but I’m still proud of a bunch of them.

The introduction of the iPad changed everything for me. I started writing bigger and more interesting apps, and they started selling better. iCircuit is my flagship app and my obsession since July 2010 - another app that I wrote as a gift to my 1999 self. It’s a circuit simulator that is constantly showing you results - it has been quite popular with students and hobbyists.

So I’ve been writing iOS apps for nearly 5 years and have been making a living from them for 3.

Jason: What role do you see Calca filling? To me it seems kind of like a melding of a word processor, a spreadsheet, and a calculator all in one. As the creator, how do you see it?

Frank: For me, Calca enables:

Quick and dirty calculations

No matter how many literate constructs I put in Calca, it is, at its heart, a calculator. I mean for it to replace the Calculator.app. It only takes 1 more keystroke in Calca to accomplish what you can do in Calculator, but once you’re in Calca you have an arsenal of math power at your fingertips.

I’ll admit that I still use Spotlight for my simple two-number arithmetic problems. But the moment I have more than two numbers, or two operations, I switch to Calca.

Development and Exploration of complex calculations

Calca makes playing with math - no matter how complex - easy. Even after I used it to verify or derive some equation, I often sit in the tool just playing with inputs to see how the equations change. Perhaps I’m strange, but it’s fun to be in a tool that encourages exploration and experimentation.

Thought out descriptions and explanations of research or study

I remember writing lab reports in college and struggling with Word’s equation editor. They’ve improved it, but I have never gotten over just how bothersome the tool was for creating original research. Even after you finished your battle with the WYSIWYG equation editor, you were just left with a pretty picture. Even if you move on to TeX and solve the input problem, you’re still stuck with pretty pictures that do nothing. There was no way to test and prod your equations, no way to give examples of their use. They might as well have been printed by Gutenberg. And so Calca is a step in the direction of a research paper writing tool that actually tries to help you.

Soulver does not support user-defined functions; to Calca, everything is a function.

Soulver has no programming constructs; Calca is one step away from being a general purpose programming language.

Soulver does not support engineering math: matrices, derivatives, etc.

Anyway, I don’t want to go on since I do love Soulver. It was a first step to getting away from Excel and old-fashioned calculators - Calca is an attempt to build upon that progress.

Jason: When I first saw Calca, as a live-editing mathematical environment, I was reminded of Bret Victor’s Kill Math project (specifically his “Scrubbing Calculator”). Do you think Calca provides a better interface for doing Math than the mostly “pen-and-paper” derived way we do it today?

Frank: Yes, I do believe it’s superior to real pen and paper since eventually you will want the aid of a machine to do some math for you.

But I see tablet-based “pen and paper” as a perfectly valid input mechanism. In fact, the earliest versions of Calca, from years ago, used handwriting recognition for input. I was trying to avoid the keyboard altogether.

What I found was that there are some hard problems in recognition to accomplish: (1) fast, unambiguous input, (2) recognition of dependencies (order), and (3) high information density on the screen. It’s easy to create fun tech demos but very hard to create reliable apps that won’t just frustrate you.

So, for now, the keyboard is a superior input device - especially on the desktop. But I am completely open to more natural mechanisms.

Jason: I should clarify, when I say “pen and paper” I don’t just mean writing in the sense of hand-written math, what I should have said was “traditional symbolic” math. Whether written on paper or typed, something like “10 = 2x + 2” is represented the same way.

Do you think the traditional symbolic notation is sufficient for math, or do you think it can be improved with the benefits afforded by a computer (near-instant computation, simulation, limitless memory, graphical capabilities, etc.)? Symbolic notation is convenient when all you’ve got is paper, but do you have any thoughts on a “better interface” for math?

Frank: Oh, ha! Yes, I completely misunderstood.

That’s quite the philosophical question and I hope you’ll forgive that what I say is stream of consciousness. While I have given the question a lot of thought in the field of robotics and engineering, I have not really considered how new tech like this influences mathematics.

So let’s start with YES. Simulation, near-instant Simulation (like iCircuit), Visualization, yes these are often superior representations of mathematics/physical systems. I’m reminded of Bret Victor questioning why we use jagged little lines to represent resistors when computers give us the ability to represent the resistor more richly as a graphical IV curve (current vs. voltage). I am on board with this.

MATLAB’s Simulink is an early example of trying to improve design work using visualizations. Now their semantics are old and not very powerful, and their visualizations are not real time, but it gets the job done - it’s an improvement over writing procedural code that must interact with real complex systems.

I have high hopes that these modern tools can progress to allow arithmetic of their visualizations for myriad applications. Imagine Bret’s resistors in series so that their curves combine into one resistor. Cool.

My only concern with these computation intensive visual tools is how well they will handle algebra - the manipulation and combination of all the symbols that are used to create this data. When I design an amplifier using an operational amplifier, I know that the gain is:

G = 1 + Rf/Rg

Now let’s say I’m in Bret’s tool that has curves instead of resistors. While he makes it easier to play with the values of Rf and Rg and see the effects of this, he doesn’t make it easier to visualize the gain equation, this algebra. I can play with the values all day long and never meet my design goal. Or worse, I stumble upon one of the solutions (there’s an infinite number) and assume that it’s the only solution.

Just like with a traditional circuit schematic, the designer can’t do their job reliably unless they know the rule: G = 1 + Rf/Rg. Naive visualizations and computations don’t instruct; they only reinforce what you already know. (Unless of course you stare at the visualizations long enough and self-instruct yourself to understand the equation.)

Now let’s say I’m an EE student from 50 years ago with no simulation software and no visualizations. Somehow, I still need to learn enough to get us to the moon. That little equation, G = 1 + Rf/Rg, is all you need to design the correct amplifier - no computers or software necessary. Not only that, but they could probably derive the equation using just three others: v = i * R, Vout = A*(Vp - Vm), A = Infinity. There is power in being able to manipulate symbols.

So what I’m getting at is this: I have high hopes for these modern visualizers. I just hope that they find ways to increase our understanding and not decrease it (by not exposing governing equations). If they don’t increase our understanding, then we might as well stick with Calca. :-)

I hope you can make some sense of that.
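(The derivation Frank sketches - getting G = 1 + Rf/Rg from v = i * R, Vout = A*(Vp - Vm), and A = Infinity - can indeed be checked symbolically. A purely illustrative Python/sympy sketch, nothing to do with Calca itself:)

```python
# Deriving the non-inverting op-amp gain G = 1 + Rf/Rg from the three
# relations Frank names: the voltage divider (from v = i*R), the op-amp
# law Vout = A*(Vp - Vm), and the ideal limit A -> infinity.
from sympy import symbols, solve, limit, simplify, oo

A, Vin, Vout, Rf, Rg = symbols("A Vin Vout Rf Rg", positive=True)

# Feedback divider: the inverting input sees a fraction of the output.
Vm = Vout * Rg / (Rf + Rg)

# Solve Vout = A*(Vin - Vm) for Vout, then form the closed-loop gain.
vout_expr = solve(A * (Vin - Vm) - Vout, Vout)[0]
gain = simplify(vout_expr / Vin)

# Ideal op-amp: open-loop gain A goes to infinity.
ideal_gain = limit(gain, A, oo)
print(simplify(ideal_gain - (1 + Rf / Rg)))  # 0, i.e. G = 1 + Rf/Rg
```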

Jason: You said you developed an LALR(1) parser generator for Calca. What is your background on writing parsers? How did you go about creating the parser?

Frank: I didn’t write the generator, just the grammar. The generator is jayc, which is a port of jay which is an implementation of yacc’s parsing algorithm. :-)

I wrote my first parsers back as that young intern at General Motors. When you work with embedded systems, you really learn what a compiler does. I spent a lot of time comparing my C code to the machine code that the compiler would generate. This ended up being a fantastic way to internalize, or grok, how computers work. Understanding compilers was then just a matter of writing some code to convert between the two.

I remember learning to write my first parser using Stroustrup’s The C++ Programming Language book. He has a chapter in there where he builds a little calculator utility. It’s still one of the best introductions I know of to recursive-descent parsing. It was mind-opening as a young programmer. From there, I gorged myself on everything Niklaus Wirth had to say on the subject (and every other subject), and wrote scheme interpreters (following SICP) over and over again until I understood language design and implementation tradeoffs.

Calca’s parser is basically a yacc file from 1970. You start with IDENTIFIER and NUMBER tokens, and slowly build your way up to recognizing your full language. It’s actually quite fun. If you like using regular expressions in your programs, you have no idea what you’re missing by not using full blown parsers.

Jason: What technologies did you use for making this project? Did you build upon open source or did you roll it all your own? How did you build the project over the course of three months (did you have many beta testers, how did the app evolve, etc.)?

Frank: The app is 100% my code written in C#. I use Xamarin tools as my IDE and to compile the app for iOS/Mac/whatever else I’m in the mood for. C# provides a lot of benefits over older languages, especially when it comes to writing interpreters, where you’re constantly re-writing expression trees.

I did look to license a CAS (Computer Algebra Library) but couldn’t find a good match for what I wanted and that had workable license terms.

At first, the app didn’t have big ambitions on the mathematics side so it seemed perfectly reasonable to write my own parser and interpreter. I had done that many times before so there wasn’t too much risk. While Calca’s engine is only now getting to be on par with these more mature libraries, and I apologize to users who actually need all the power of Mathematica, having a language and execution model that is specifically designed to be humane has benefited all of us.

There was a core of 3 beta testers working over the entire development time of the app. I know, not a lot of people, but we didn’t know that anyone else would like it! :-) To help make up for the lack of testers, a pretty extensive set of automated tests has been developed. Last I checked, I run about 3,000 unit tests whenever something changes to make sure that the engine is consistent.

That said, having 1,000 people use your app is quite different than 3 so I’ve been fixing a fair share of v1.0 bugs. :-)

Jason: Most of my peers (self-included) work in a lot of Objective-C. What differences have you found between it and C#? How do you find iOS development with a non-Xcode environment?

Frank: Ah, I didn’t know, I wouldn’t have been so glib.

The two big differences between C# and Obj-C are the garbage collector and the more succinct syntax. As for the GC, JWZ said it best, “I have to admit that, after that, all else is gravy”. Having one around instead of manual memory management, or reference counting, or even some automatic reference counting frees you to focus on computation instead of data management. There is a correlation between the popularity of scripting languages and the fact that they have a GC.

On top of that C# just has a lot of syntactic convenience. It’s a two-pass compiler so you don’t have to repeat declarations. There is a unified safe type system (like Obj-C’s id, except that it applies to everything, including C code). It has modern language features: generator functions, list comprehensions, closures that can capture variables, co-routines that can be written using procedural syntax, and on and on. There is also a giant class library and a huge 3rd party ecosystem. The real benefit is that it can still access C code and functions natively so I can use the entire SDK. So mostly pros, and just a few cons.

My first two years of iOS development were in Xcode, but it was a younger version so it’s hard for me to compare the two IDEs. Generally speaking, Xamarin’s tools have superior code completion and debugging visualization. But both IDEs are mature powerful products, and I honestly like them both.

I have not had any real problems being “non-Xcode”. There was that silly scare years ago when Apple was trying to block non-Obj-C apps, but that was an obvious blunder on their side so I never concerned myself with it.

Jason: Where do you see Calca going forward?

Frank: Well, I’m submitting v1.1 to Apple today (for both platforms), which fixes a lot of the v1.0 bugs (Note: v1.1 has since been released to the App Store).

After that, there is a lot of low-hanging-fruit that I cut for v1.0:

Unit conversion

Plotting

Solving differential equations

Larger library of utility functions

These are all features I want myself, so they’re guaranteed to be added. I’m also actively listening to feedback. This v1.1 release is essentially all fixes from bugs submitted by users.

After all that, we as a Calca community have to see how to proceed. As I use it as a tool every day in my own work, the app will be maintained as long as I’m still working. But we as a community will have to decide what additional features would make it more powerful, or if it, itself, should be eventually supplanted for something better. Always advance the state of the art!

Jason: It’s funny you should mention plotting. I’ve actually been working on a math-related program in my spare time lately, specifically to do with graphical representations of formulas, so when I found Calca, it caught my eye. Do you think plotting is an essential function missing from (most of) today’s “calculators”? Will you be approaching some of Mathematica’s functionality?

Frank: Plotting is a very important feature.

I have spent a lot of time in Grapher.app playing with different blending functions:

y = x
y = x^2
y = sqrt(x)

I would just look at the graphs and decide which one met my needs. Now I could do this with a calculator by asking for its derivatives at the points x = [0, 1] but that’s tedious and silly. It’s much easier to see a plot and just pick the one that seems right. I want that feature in Calca.
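(The endpoint-derivative check Frank calls tedious can at least be automated. A purely illustrative Python/sympy sketch comparing the three blending curves:)

```python
# Illustrative only: comparing the blending curves y = x, y = x^2,
# and y = sqrt(x) by their endpoint slopes, the "calculator" way.
from sympy import symbols, sqrt, diff, limit, oo

x = symbols("x", positive=True)
curves = {"x": x, "x^2": x**2, "sqrt(x)": sqrt(x)}

for name, y in curves.items():
    dy = diff(y, x)
    slope_at_0 = limit(dy, x, 0, dir="+")  # how fast the curve leaves 0
    slope_at_1 = dy.subs(x, 1)             # how fast it arrives at 1
    print(f"{name}: slope {slope_at_0} at x=0, slope {slope_at_1} at x=1")
```

sqrt(x) leaves 0 with infinite slope and arrives at 1 with slope 1/2, while x^2 does the opposite - exactly the qualitative difference a plot shows at a glance.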

I plan on supporting plotting of one parameter functions like: g(x) = sin(x) + 1/x^5. These will be displayed as graphs similar to Grapher’s.

I also will try to add two-parameter functions: h(x, y) = if x > 0.5 && y > 0.5 then 1 else 0. These will be displayed as images.

I’m not sure how far to go from there - I will simply listen to feedback and see what more features people need.

Poetreat is a delightful iOS application by developer Ryan Nystrom designed to do one thing well: Let you write poems and discover rhymes.

The app has a simple but beautiful user interface, which starts with its colourful feathered icon. The main interface should look familiar, yet unique, and fits in well with standard iOS apps. You’ll know how to use it.

From the main interface, you can start typing your poems as you’d type anywhere else, but Poetreat analyzes your text as you type, and helps suggest rhymes for you. It doesn’t just suggest them anywhere, though; it also has a sense of the semantics of your poem: it keeps track of the structure of your poem to provide rhymes when you need them according to its rhyme scheme (ABAC, for example).

Poetreat is free in the App Store, with a $0.99 In-App Purchase to unlock iCloud syncing and custom themes.

The app really caught my eye when I realized what it did about recognizing the structure of your text, so I asked Ryan if he’d be interested in an interview. You’ll find our conversation below:

Interview

Jason: So first off: What’s in a name? How did you come up with the name for Poetreat?

Ryan: The name Poetreat came out of the blue. The app idea was always set in stone, but the name was really tough. I wanted to find something unique that conveyed what the app was meant for. I never intended the app to be used for poems longer than 8 or so lines, so I was thinking along the lines of “snippet” or “piece”. After a late night brainstorm I opened the pantry to grab a snack, and that’s when it hit: it was a “treat”. I really liked the word because it’s supposed to be a small portion but also delightful.

Jason: Why did you decide to make this app? Did you stick with the same concept from the beginning or is what we see here what an original idea became?

Ryan: Originally I made an Objective-C port of a PHP library for text readability called RNTextStatistics. I created that project just for fun and because there was really nothing out there for Objective-C developers to drag and drop into their projects that offered this sort of analysis on text. It turned out to be a doozy of a project, but an incredible learning experience. The syllable counting naturally led into thinking about meter and rhyme. That’s when I decided to create a poetry app.

Jason: What was the development of the app like? How long have you been working on it?

Ryan: The app actually didn’t take long to develop at all. Call a rhyming API, store some data locally, sync with a backend service, and it was done. I’d estimate just a month or two of tinkering at night and it was completed. For every side project I take on I decide to use a new technology I’m unfamiliar with. One of the big ones for Poetreat was Core Data syncing with AFIncrementalStore. This is a really amazing project. I ended up spending more time creating demo projects to show off how easy it is to sync Core Data with a web service using AFIncrementalStore.

Jason: Poetreat has a lovely and useable interface. What went into the making of that? Was it all custom or did you use open source components or a mixture?

Ryan: You know how I mentioned the development was pretty quick and easy? Well the design was quite the opposite. I’m a developer by education and talent, not a designer. However I really want to be self-reliant because I am sometimes overly critical of others’ work. It makes working in a developer-designer pair really difficult for me. I decided to design this app 100% by myself, but that required a lot of time spent on Dribbble researching what others had done. I created about 5 style guides for Poetreat before I finally settled on the live app.

That isn’t to say I didn’t have help. The designers at my day job helped tremendously with feedback. A couple times they even watched over my shoulder in Photoshop and gave me some tips and tricks.

Jason: The syllable counting feature is something I’ve never seen in an app before. How does that work? What other things do you do with natural language in the app?

Ryan: That all comes from my open source text parsing library RNTextStatistics. Dave Child was the original author of the PHP version. I learned a lot about readability scoring in the process though. It’s even spurred my next app that is entirely about readability scoring and improving. The big tests in readability are:

Jason: What are some of your favourite rhymes you’ve found with the help of Poetreat?

Ryan: You know how everyone always uses the word “orange” as the impossible rhyme? Poetreat can actually find some pretty good rhymes for “orange”, my favorite being “lozenge”. It’s not exact, but it’s not a word I’d have ever come up with!

Jason: You’ve previously mentioned to me this is your 7th app for the App Store and you’re really looking to make this a winner. I’ve had my own App Store woes in the past as well. What have you learned from your experiences in the past and how are you using that to your advantage this time around?

Ryan: Well I’ve not learned, but been told that you have to spend money in marketing. I went with two app “marketing” websites:

and filed an official press release. All in all I spent maybe $250, definitely not that much. However now that it’s been 2 weeks I can tell you I will never do that again. Total waste. No reviews, nothing in Google News except the official Press Release. I know that Poetreat isn’t going to set the world on fire, but I think it’s unique enough to warrant some talk. I’m sitting at about 3,500 downloads now, which is by no means a failure. But both of those services above have tracking and analytics for who reads and actually reviews your stuff. I’ve gotten 0 press, most of my downloads are purely by word of mouth and community. I emailed about 15 people on launch day (including yourself!) to take a look at it. Everyone responded with wonderful criticism. I was really happy. I could have only done that and been fine.

I also spent, in my opinion, way too long on design. I could have released this app in November with default UIKit design and it would behave exactly the same. I’m planning on going that route next time. Some UIAppearance and some CALayer animations, but I’m done spending hours in Photoshop.

Jason: In-App Purchase seems to be a popular route these days in the App Store. Why did you decide to offer the app free with IAP?

Ryan: Because it’s the trendy thing currently, and I wanted to see why. However I’m finding 0 difference in money earned between this and my paid apps. I’ve got about a 2.5% conversion, which on 3,500 downloads is about 87 sales, netting me roughly $61. Now I’ll admit that my IAP doesn’t unlock the most amazing features, I definitely went with a freemium model.

(Good News: This interview was conducted a little before WWDC 2013, but Ryan sent me some updated info about how Poetreat is faring. He says:

Since I sent this email Poetreat got featured in New & Noteworthy, and hit #27 overall in the US and #1 in the US Lifestyle category.

Awesome!)

Jason: Almost completely unrelated, but have you ever considered turning the syllable counting and rhyming analysis into a multiplayer/turn-based game? It seems like it could make for an interesting “Rhyme with Friends” kind of game.

Ryan: Abbbsolutely. This is on my palette as a possible followup. It’d be even better to go Loren’s route and use Game Center so I don’t have to muck with servers again. Something like “build a poem together”. However I’ve already tackled a poetry app, and it’s shown me that sales won’t be enough to motivate me if the idea isn’t exciting.

Inventions and Visions

I look at all my computing heroes and I see many of the great accomplishments they’ve made. I see many great inventions they created and gave to all of us. I see how they’ve enhanced computing for the betterment of all, and I’ve been trying to find a way to contribute in a meaningful way. I think to myself, “If I could have invented just one of the things they’ve made, even if it took me a lifetime, I’d be happy.” I look on in astonishment and I can’t conceive of how they made their great inventions. At least not until recently.

What I’ve come to realize is all the heroes I look up to, all their inventions weren’t created for their own sake, but were instead created along the road towards a Vision. Doug Engelbart might have helped invent the mouse, hypertext, and collaborative software. Alan Kay might have helped invent the modern Graphical User Interface, the laptop computer, and Object Oriented Programming. But none of these things were inventions for their own sake: they were simply the natural fallout of the vision these people were working with.

Doug Engelbart didn’t set out to invent hypertext, he set out to Augment Human Intellect, and creating a form of non-linear text navigation was just a natural consequence of this vision. Of course he invented hypertext, there’s no way he could have avoided it on his journey.

Alan Kay didn’t set out to invent the Smalltalk programming language, he set out to create a democratized Personal Computer for Children of All Ages, where every part of the system was malleable and executable by any user. Smalltalk was never the goal, it was “just” (emphasizing because in reality, it’s of course a tremendous technical achievement) a vehicle to the next step in the vision.

Seymour Papert didn’t set out to build an electronic turtle and the Logo programming language to power it, he set out to re-imagine what education looked like when the flexibility and dynamic behaviour of the computer was allowed to play a starring role in how a child learned to think and reason. The Logo programming language wasn’t the target; instead, it was an arrow.

Doug Engelbart had a vision to augment human intellect.

Alan Kay has a vision to democratize computing and create a more enlightened society.

Seymour Papert has a vision to unshackle education from the paper and pencil, and create a society fluent in higher-level mathematics and reasoning, enabled by the computer.

Bret Victor has a vision to “invent the medium and representations in which the scientists, engineers, and artists of the next century will understand and create systems.”

I’ve spent so much of my life with my eyes and mind keenly focussed on the inventions of others, blatantly ignoring the purpose of those inventions. It’s like Shakespeare is trying to tell me a story and I’m marvelling at his pencils.

I get so caught up on the inventions themselves I can’t possibly fathom how I’d ever invent anything of that kind of magnitude. But I’m looking at it all wrong. If necessity is the mother of all invention, then I need a necessity. Inventions aren’t the point, they’re just the fruit that falls out of the tree as it reaches to the sky.

Negative Space

I don’t have a vision.

I need a vision.

While I have lots of goals, both short and long term, I consider those separate from a vision, because a goal implies there’s an endpoint. I think with a vision, it’s an on-going thing, with a target forever challenging you to keep moving forward.

I don’t know what my vision is, but until I figure that out, I can look at what it is not. Maybe by carving out the negative space around it, I’ll be able to form one in what’s left behind.

A Vision for What I Don’t Want

In twenty or fifty years, what I don’t want is for people to still be using “apps.” Computer programs as isolated individual little packages, operating independently and ignorantly of one another is not something I want to see in my future. I don’t want computer software to continue to be a digital facsimile of physical products on a store or home shelf.

In twenty or fifty years, what I don’t want is for software to be coded up exclusively in textual formats, which are really just digital analogues of paper punch cards. I don’t want to have to type in code in basically a text editor, have some compiling program spit out a binary, and then have the system launch the software once again for the very first time, leaving me to imagine what the program is doing. This is an antiquated way to build software, and it has no place in my vision.

In twenty or fifty years, what I don’t want is for “professional software developer” to be a common, mainstream job like it is today. There are so many great minds in every field in the world, from the sciences, to medicine, to finance, to families, and they’re all at the complete mercy of professional software developers. A scientist or artist cannot create their own digital tools, as the world exists today, and instead must rely on software developers. This has no place in my vision. I want every person to have control over what they can make on a computer, so much so that it puts software developers like me out of a job. There might be a few of us left around, for things like low-level systems programming, but otherwise, I don’t want my job to exist.

Finally, in twenty or fifty years, what I don’t want is for children to grow up in the same world we have today. I don’t want the education system to continue to ignore the computer’s true capabilities, and instead cling to teaching everything as if we only had paper. I don’t want children to be manipulating “2x + 4 = 10.” I don’t want them to be trapped by paper; instead I want them to have fewer restrictions on their imaginations. I don’t want them to think of computers as binary beasts of “yes or no”, “right or wrong”, but instead as a digital sandbox where errors and mistakes and messiness are encouraged and explored.

I don’t want my children to grow up in the world I grew up in. I don’t want their education to be the same, I don’t want their environment to be the same.

Encircling a Vision

All of that is to say I’m trying to find what I want to work on for the rest of my life. I’m trying to find a driving force, an inferno and dynamo which will power me and propel my work. Levers and pulleys are wonderful things, but they’re artifacts to help me on my way. I’m trying to build a civilization.

I’ve been thinking a lot about the essay I published last weekend, “Addition, Multiplication, Integration”. In it, I laid out the basics, the vapours of a conceptual framework for building software faster (although the more I think about it, it might not just be about faster, but also about better). The gist of it was:

Building on the work of others with shared/open source code is like Addition.

Building software with others, collaboratively, is like Multiplication.

Building new tools to help us reason about and trivialize the tricksy problems, those beyond our current abilities to easily juggle in our heads, is like forms of Integration and Derivation.

This was the really important part of the essay. Software developers so often get caught up on the trivial, yet devilish bits of writing programs, where they’re either facing common mistakes and bugs, or they’re facing things they can’t easily think about (dealing with higher dimensions, visualizing large amounts of data, memory, large computations, etc.). By building new tools to help us reason about and truly trivialize these sorts of problems, it should have the effect of making these problems less of a roadblock, and so we can work faster.

I chose the Integration/Derivation metaphor because not only is that a useful way to arrange things, it also works as sort of a compactor, squishing a higher dimension into a lower one and spreading the details around. Tricky problems which once occupied an entire plane, sprawling in both directions, can, once trivialized with tools, be squashed down to a single point. Once an immense vastness, now finite and graspable. This is kind of like Bret Victor’s Up and Down the Ladder of Abstraction.

What I’ve been thinking more about since publishing it, and where the arithmetic and calculus metaphors break down, is that these “levels” aren’t really mutually exclusive; instead they feed and fuel each other. Using open source tools and working with others helps us build new tools faster, the kinds of tools described in the third level, for solving trickier problems. And once those trickier problems are solved, we can create more open source code with those solutions built in, which helps us work better together, too. It’s a positive feedback loop felt throughout the system.

This put me in mind of something I’ve always felt with respect to my own education, that is to say I like taking beginner classes and reading beginner articles. But I have had trouble putting the reason into words.

Chances are you’ve had a lot of teachers. Stop and think about it. I have been to one junior high, three high schools, two colleges, and two universities and would not care to estimate the total number of teachers because odds are the estimate would be too low.

Right from the time you left grade school and entered junior high it has been a different teacher/educator/mentor/guru/wikipedia editor/your title here, for everything you have undertaken to learn and at every level of expertise. Each one of these educators has spoken from a different base of experience.

Preface

This article acts as a complement to Pull Requests Volume 1, which focused on writing great pull requests. Now I’m going to focus on the other side of things: how to do a great job as the reviewer of a pull request.

Like in the first article, I’m going to write this from the perspective of a reviewer on a team of developers working on an iOS app, using GitHub’s Pull Request feature. However, many of the things I’ll discuss apply equally to any kind of software and any kind of version control system. These guidelines are based on my experience working at Shopify and The New York Times.

The overarching theme behind both of these guides is to give examples to help software developers work better together. I’ve seen too many examples of ego getting in the way of quality. Software developers are professionals, but we often have difficulty with social things, especially interpersonal issues. That’s not only limited to how one developer gives criticism, it also includes how another developer takes criticism.

The important thing to realize is that reviewing code, like any professional activity involving developers, is not supposed to be personal. It’s not about making one person feel good or another feel bad. It’s about improvement, both for the developers and for the software they make. Keep that in mind when you’re acting as a reviewer, or when you’re receiving feedback on a pull request you’ve made. It’ll make you both better developers.

You are the Gatekeeper

As a code reviewer, you are acting as a Gatekeeper for your application’s codebase. No matter what you do while reviewing a pull request, the end question has to be: “Does this improve or worsen our codebase?” Your mission is to not accept the pull request until it improves the project, but how you accomplish that varies from team to team and project to project. Here are the things I think are most important to an iOS project, though most of them will apply to any kind of project.

Read the Description

This one should be so obvious it’s almost not worth mentioning, but it’s important enough to still warrant being said: as a reviewer your first task is of course to read the description of the pull request as provided. Hopefully, the developer you’re working with wrote you a detailed one. This is the step where you become familiar with the issue that’s being solved. You might need to read up on a specific issue or story in your company’s issue tracker to do this, or it might be something so simple it was explained fully in the description itself.

Verify it

The next thing you should do as a reviewer is Verify the pull request accomplishes what it has set out to do. How you do this depends on how your team works.

In the most basic case, this means reading the source code, hopefully guided by the description provided, looking for how the code works. You’ll want to look at the code difference to see what was deleted and what was added. Here’s where you can spot any immediate issues, like a clearly missed edge case, or other common problems like incorrect use of comparison operators (I’ve been guilty of many more less-than-or-equal-to bugs than I’d like to admit). Reviewing pull requests requires a keen eye for things like this.

For teams who do their own QA, this is also the time where you’ll do your testing. This means checking out the code locally, running it, and following the test cases provided by the developer. In the best case, the developer has provided lots of cases for you to test against, but here’s where you might find your own too. Good things to look for here include strange input (negative numbers, letters vs numbers, accents and dìáçrïtîcs, emoji, etc.), multitouch interaction, device rotation or view resizing, device sleep states and foreground vs background issues, to name but a few.

If the project has unit tests, make sure the pull request tests all new functionality as needed, and that all the tests pass. GitHub has recently introduced a pull request feature to integrate with build servers, so the tests can even be run automatically before the pull request is merged in, but if not, you can always run them locally.

Above all, you want to make sure this code does what it intended to do and doesn’t introduce any new problems.

Code style

While you’re looking at the code, you should be checking to see if it conforms to your project’s style guide. Even if your project doesn’t have an explicit style guide, you probably have a good idea of the general app style in your head (and you should still consider creating an explicit guide).

Don’t be afraid to be diligent here, because even though style violations may be minor, pointing them out isn’t petty. They add up. There’s nothing wrong with pointing out issues with whitespace, brace location, or naming conventions, especially not when there are multiple slips. Both parties should be aware that fixing these issues helps make the code more coherent and consistent for everyone going forward.

Accessibility

Make sure the pull request includes proper accessibility for all new interface elements. This is really important to do as you build your application from the start because it can be done incrementally, and your customers will thank you for it. It’s simple enough to build the bare minimum accessibility into your app this way, but if you want to do a stellar job, consider talking to Doug about it.

Any pull request that ignores accessibility features should not be merged into your project until the omissions are fixed.

Localization

Much like accessibility, you should reject any pull request that doesn’t localize user-facing text in your application. This doesn’t mean the request has to include translations, it just means that any string added to the app for user-facing purposes should be localizable.

Even if your project does not currently offer localizations for non-English locales, every pull request should still include this, so your project can be expanded to include other locales at your whim. Pull requests not including localized strings should not be merged in until they’re fixed.

Documentation

This is one we’ve started doing recently on our team: every pull request that introduces new public API needs to have that public API documented. Since a forthcoming version of Xcode is going to include Headerdoc and Doxygen doc-generation built-in, we figured it was a great time to start writing docs formatted with them (we chose Headerdoc because it seems to be what lots of Apple’s headers are already documented with, but since both formats are supported, it matters less which you pick and more that you are consistent with your choice).

It’s senseless to include docs for every bit of code in the app, so we’ve set the bar at only methods and properties in our public interfaces. Private methods don’t make a whole lot of sense to document, generally speaking, because they’re often subject to change or are too internal to warrant the effort (although there’s really nothing wrong with documenting private methods, either).

Forcing a developer to include documentation for their API forces them to think more lucidly about what that API does, what parameters it takes, and how it works. In the course of writing documentation, I’ve realized, once I “read it out loud,” that I’d made code needlessly complicated, and immediately figured out a better way to write it. All this just by trying to explain in documentation what the code does.

As a reviewer, you should reject any code that doesn’t live up to this standard. As new code gets merged into your project, more and more of it will be documented. New developers will be able to quickly learn how your API works, and new code will be written more clearly as well.

The Dings

The list above covers things I’ve either been dinged on while having my own pull requests reviewed, or things I’ll ding other developers for before I merge theirs in. It’s not a list of demerits; they’re not errors you should be ashamed of. They’re just common pain points, things that should not be merged into a project. You shouldn’t feel bad about mentioning any of these, just as you shouldn’t feel bad if you’re “caught” on one of them, either. But if the need arises, don’t be afraid to attach an animated gif summing up your feelings.

I’ve been grappling with this question for a while now, because I’m just full of ideas, new things I want to try, and I’m held back with the speed I write software at. I look back through the history of software, or of any technological development, and I realize: this need not be so!

We can write software faster; we’ve been doing it all along. Things that used to take long and arduous periods of time to write are done more quickly now, to an extent. But I want to do better, to go even faster, and I want to improve everything along the way. I don’t want my ideas to be limited by unnecessary bottlenecks.

Why the hurry?

With a fulltime job and a fulltime personal life, I give myself about one hour per day to work on my own personal projects. I start my mornings by coding for an hour or so, on something I really care about before I head off to work (it’s a great habit to be in, by the way, because it starts the day off on a really positive note, leaving me energized for the rest of the day). Being limited to one hour per day forces me to be focussed on my work, too. Although one hour per day isn’t a lot of time to devote to all the projects I want to get through (at last count, I’ve got about 10 in the backlog), it’s really a hard limit for me now. So if I want to write software faster, something else has to give, and that’s got to be from the software itself.

Addition

The most obvious way to work faster is to build on the work of others, and for software developers the most obvious way to do that is to use components built by other developers. Open (or closed) source projects and objects created by other developers are a fast way to add new things to a project so that I don’t have to build them myself. Some libraries I can depend on and some I can’t, but I’ve been at this long enough that I’ve developed a keen eye for telling the good from the bad.

As developers, we’ve created plenty of solutions to allow for better sharing of code. We’ve got object oriented systems, and we’ve got repositories full of them strewn across the web. GitHub creates a social network around them, and package managers allow us to install them into our projects at a whim. Still, I can’t help but feel these solutions all fall tremendously short, for reasons I’ll detail in a later article.

Suffice it to say, using components from other developers does improve how fast we can write software. It’s like a form of addition: work has been added to my project for free.

Multiplication

If adding components from other developers is like addition, it follows then that I can work even faster if I work with other developers. This is like multiplication, a collaboration where we collectively work harder because there are more of us doing the work.

I’d even consider something so superficially simple as bouncing ideas off another developer to be a form of collaboration, because it allows me to get outside of my own mind.

Derivation and Integration

The ways I’ve discussed so far are all helpful methods of building software faster, and I’ve been using them recently to great success. But I know there is an even better way, in addition to those already mentioned, one that sort of transcends all the other ways and affects my ability to write software at all levels.

How do we really get faster at making software? Well, we have to eliminate the bottlenecks, and by far the biggest bottleneck for any software developer is thinking. Thinking is what takes up the vast majority of our time. I think there are two main kinds of thinking a programmer has to do, one good and one bad, and we need to create new ways to shift the balance in the good direction:

Good thinking is thinking about the core problem to be solved. This means things like the overall problem, the algorithms needed to model a system, the intention of the user the developer is trying to meet, etc. These things are necessary to think about; they’re what people want to solve. But they are encumbered by…

Bad thinking is not about bad thoughts, just counterproductive ones. These are the “tricky bits,” where the developer is forced to think about implementation problems, about the minutiae of either the system or how to program it.

These are things like bugs that need to be understood and fixed, but also trying to reason in unfamiliar or unintuitive ways (for example, thinking in higher order dimensions, dealing with non-human scales, changing coordinate spaces), anything that causes a person to slow down and have to reason about something before they can continue onto their “good/real work.”

If code sharing is addition, and working with others is multiplication, then this becomes like a form of differential calculus.

I want us to contribute to tools that will minimize the amount of time we spend dealing with the rough kind of thinking. I want those sorts of details to be trivialized, so that the good kind of thinking becomes more natural. That’ll let us spend more time on the problem domain, which will open up all kinds of new ways of thinking.

I’m not exactly sure what those sorts of tools look like yet, but I have some hints to help me find them (and I’d love to hear how you think we can find them, too):

Any time you’ve got to draw something out on paper (geometry, visualizing data or program flow, etc.) is probably a good hint we could develop a tool to help reason about this problem better.

If it’s a common source of bugs for programmers, this is probably another candidate (off-by-one errors, array counting, regular expressions, sockets or other kinds of stream-bound data, etc.).

Any time it takes a lot of tries and tweaks to get something just right. If you’ve spent too much time trying to tweak or visualize how an animation or graphic should look, that might be a great place to create a tool for reasoning about it better.

I’ve got some preliminary work done on this, but nothing I’m ready to show off quite yet. In the meantime you might like to check out the Super Debugger, as a crude attempt until I’m ready.

These certainly aren’t the only pain points we software developers need help reasoning about, but it’s a start. And that’s just for software developers. I’ve completely left out everybody else. Every physicist, every teacher, every architect, every doctor, every novelist. These are all domains which could benefit greatly from new ways of reasoning to help them do their work better. But I think we should start with the problems we’re having before we can begin to help anyone else (hint: we shouldn’t be solving anybody’s problems, we should be giving them the tools to create their own way of thinking and reasoning better. It would be presumptuous to assume we software developers knew how to fix the world’s ills. But we can enable them to do it.).

It was a hot and smoggy sunny day in Brooklyn. Very hot. I mean it was like somebody had doused a planet with gasoline and then lit it on fire. That kind of hot. I stood in the middle of a sidewalk, beside one of the few lush parks in the city. I had stopped dead in my tracks because I just had to tweet something witty. I’m so witty.

So there I was just standing there, with my new (well, pretty new) black iPhone 5 held precariously in my hand. In my claw. The iPhone 5 was held precariously in my claw hand, because it’s kind of a little too big anyway, but I get an extra row of icons so that’s really nice. The glare from that fat old sun off my iPhone screen is almost unbearable (Unglarable? Perhaps I’ll draft that as a tweet for later). I’m holding the phone at around waist level, with my head tilted down. From behind, I know this pose looks quite a lot like a man using a urinal, but I figure since I’m standing like this in public nobody will probably care.

I’ve already made my tweet and I’m just standing there, feverishly pulling to refresh. Thank god for that gesture, I mean the iPhone 5’s screen is nice but if I had to reach all the way up to the top of that screen every time I wanted to make my Twitter feed refresh, my thumb would probably fall off. Not to mention the fact that the refresh buttons are almost always on the right hand side of the screen. It’s kind of discriminatory to lefties like me, but the pull to refresh gesture is an equalizer. It’s a real innovation, really.

I’m pulling and refreshing because I just know the tweet I’ve made will set some people off, and I’d really like to know what they have to say back to me. The people who follow me on Twitter are witty like me too. You have to be, because you’ve just got to be focused if you want to make an impact on Twitter. This tweet will probably get me so many Favs, too. I’ll check my email, because I know Twitter will email me when someone retweets me now too. Nothing yet.

“What’s up there doods?” I hear from behind, clearly aimed at me, because it’s one of the phrases I use when talking to my close childhood friends (we’ve always liked to poke fun at Metallica’s Lars Ulrich, who seems to have a good sense of humour). I don’t recognize the voice though, so I turn around.

Standing behind me is a young kid of probably twelve years old. He’s standing in my shadow, so as my pupils adjust to the change in lighting, I conveniently start to see more of his traits. He’s skinny. So skinny (is he sick? is he eating enough?). OK. Not that skinny. I was like that as a kid, too. He’s got messy brown hair, kind of curly, but really just messy. The wind hasn’t even been blowing in Brooklyn today because it’s too hot for even that, so his hair must just be messy. He’s wearing round glasses on his somewhat broken-out face, and a t-shirt with cartoon characters I don’t quite recognize (the writing on it looks Chinese, which I happen to not be able to read, but I have an app on my phone that’ll translate it for me). He’s got some rather large Adidas on his feet. They look like worn out flippers because he clearly hasn’t grown into his feet just yet. They’re awesome sneakers, though.

Of course, all of these descriptions really happen as thoughts inside my head in less than 500 milliseconds and I have no idea how they actually work. I can’t conceive of my own brain.

“Uh, hey kid” I say with a half smile, because remember, I just turned around like a second ago, so it doesn’t seem like there was a weird gap or anything. “Not much I guess” I’m making conversation but I feel a little out of place. Grownups aren’t supposed to talk to strange kids, especially not near a park. “Do I know you from somewhere?”

“Yes you do. I’m you. You’re me” he says without beating around the bush. I find it kind of hard to believe, because if I were about to reveal that, I think I would have tried for a little more dramatic tension.

“Huh?”

“I’m you. I’m a younger version of you. I travelled through time to come talk to you.” He did look kind of familiar, now that I think about it. I don’t really believe him, but I’d just re-watched “Back to the Future” a few nights ago, and so time travel was still on my mind. I thought I’d humour him. It’d make for a great story, if nothing else. People tell me I’m good at telling stories.

“Well little Jason,” I say to myself, sounding more patronizing than I’d intended, “I don’t remember travelling through time when I was your age. Shouldn’t I have remembered travelling through time and meeting myself in the future?” I know a thing or two about the implications of time travel.

“Probably, but you don’t remember because you haven’t done it yet. I didn’t come from the past, I came from the future,” little Jason said. OK, that doesn’t make any sense, I thought.

“OK, that doesn’t make any sense,” I said. “You’re what, twelve right?”

“Eleven”

“OK, so if you’re me and you’re eleven and I’m me and I’m twenty-five, how did you come from the future?”

“Things don’t always make sense to the past,” he said. “Some things seem unreasonable to one generation, but later generations learn they were wrong. And it’s hard to teach that to the past, but with time machines, it’s a little easier”

“Sure, but I still don’t see how that’s possible, even if you had a time machine.” I was more curious than incredulous.

“OK, let’s take an example they teach in kindergarten. You’ve got two metal balls of the same size, one weighs four pounds, the other weighs two. You drop them both at the same time from the same height and see which hits the ground first. In old times, people used to think the heavier ball would fall faster, because it sorta makes sense. They couldn’t understand how both would fall at the same rate”

“Right…”

“You could show them, but that really wouldn’t convince them. Believe me, I’ve tried. But here’s the trick. Here’s how you get them to understand something that seems impossible. And it doesn’t always work right away, and it’s not an easy trick, but here’s what you do. You don’t convince them of anything, but you instead get them to convince themselves that it’s true.”

“And how do you do that?”

“You take another two pound ball, and you tie it together with the other two pound ball. The tie weighs nothing, so now you have a new shape that weighs four pounds. It’s made of two, two-pound shapes. How could it possibly fall any slower than the other four-pound shape?”

“Wow. Hmmm. That’s a neat way to look at it.” OK. He had me there.

“But I didn’t come here to talk about balls, Jason,” said little Jason.

“Well so to convince people of the past of these seemingly impossible things, you’ve got to get them to convince themselves. Sure, that makes sense. But what still doesn’t make sense is how you came from the future and yet you’re younger than me. You haven’t got me to convince myself of that yet”

“That’s what I came to talk about,” he said with a grin. “I can’t do it yet. I can’t actually get you to convince yourself that I’m you and you’re me yet, because you need to invent it. The tools you need to reason in that way just don’t exist yet.”

“You’re telling me you travelled through time to convince me to make a tool to convince myself that you in fact did travel through time?” To say the least, I was a little perplexed.

“Clear as mud?” His face told me he understood this clear as day. Meanwhile I understood this about as clear as the smoggy Brooklyn day.

“It seems like a pretty roundabout way to get things done.”

“Convincing yourself is just one implication of what you need to do. It’s a bonus, a result, but it’s not the goal you’re after. It’s not the goal we’re after,” he said to me. I said to me? “Don’t worry about the time travel details. Don’t worry about how I got here, or how old I am. That’s not important. What’s important is what you need to do. You can make things better.”

“Am I not doing that already?”

“You are, but you can be doing better. As generations go on, society as a whole learns more and more. They get smarter than those who came before them. They create new art, new forms and expressions, new tools to help them reason. These things extend our reach and let us think new thoughts we couldn’t possibly think before. They take the good and spread it around to everybody so everybody grows up in a better world.”

“This is pretty deep for an eleven-year-old. You sound like you know a lot.”

“Compared to the eleven-year-olds of today, I do. Because in the time I live in, people have more tools at their disposal; they can think in more powerful ways, because they have tools to help them imagine new things. We’re running on the same brains you have here, but we’ve got help from our inventions. I need you to help invent those.”

“How do you suppose I do that? I don’t think I’m as enlightened as you seem to be, kid,” I said.

“And that’s my whole point. You’re not enlightened now, but you’ve got to start. Make a tool, reason better, so you can make a better tool, and reason even better. And so on. That’s how it goes. You start slowly; for now you’ll have to make ‘software’, I guess. It’s the best tool you have in your time.”

“I already do that. I make software for a living, you know.” It’s my job and I’m quite proud of it, I thought to myself. “And besides, software is for networking and photos and news and videos anyway,” I said.

“You’ve got the right skills but you’re doing terribly limited things with them, you know. Your software pulls information from one computer and shows it on a smaller computer, and sometimes it sends it in the other direction. Who is doing higher reasoning with that?” That hurt. “It’s like you’re a really great drawer, but all you draw are stick people. That’s great and there’s a time for that, but there’s so much more to the world. You could discover some of the really big things my time is based around. You can’t even see it yet. At all. But you can get there.”

I felt like I was going to collapse. Maybe it was the heat. Maybe it was what little Jason had just told me. Maybe it was what little Jason had just convinced me of. My phone buzzed furiously in my pocket.

“Leave your phone, it’ll wait. You can’t explain this in a tweet. You can’t explain this on your website, although I’m sure you’ll try. You won’t be able to explain it with your software, just quite yet, but you can get closer,” he said.

“Do you ever get the feeling like there’s something you’re missing? Like something standing right in front of you, but it’s invisible? You can feel the wind, but you can’t see it and you don’t know what the air is, but you know it’s there.” I could feel the air.

Little Jason smiled. “Chase that feeling. Humans figured out what the wind was. We built microscopes to see things our eyes couldn’t. And you can make tools to help us think things our brains couldn’t.”

My phone stopped buzzing.

“I’m young and this is the future I want to inherit,” he said.

He said goodbye and hopped on his bicycle. His meager little legs pedalling with ease, he biked a lot faster than I could run.

How many visualizations of flight paths, languages on Twitter, Facebook friend networks, and votes in the Eurovision song contest does the world need? I’d argue far fewer than we see nowadays. On the other hand, how many graphics about inequality, poverty, education, violence, war, political corruption, science, the economy, the environment, etc., are worth publishing? I’ll leave the answer to you, but you can guess what mine is.

I know of no research to back me up on this, but my guess is that visualization designers are, on average, nerdier and more technophilic than your average Jane and Joe. When we have the freedom to choose what topics to cover, we tend to lean toward issues most people don’t care much about, but that we consider fun and cool. Besides, we tend to focus on areas in which data are easily available, and arranged in a neat way —Internet and social media usage are the obvious examples, but there are many others.

I’m not sinless, by the way. Between January and May 2013, I oversaw a visualization project by a Spanish student, Esteve Boix, which I’ve described in detail on my website. Its topic was Buffy the Vampire Slayer, Joss Whedon’s geeky TV show. A portion of my soul —the one that remains stuck in a Dungeons & Dragons and comic book-filled adolescence— was enthralled. The other side —the adult, emotionally hardened one— wondered if the energy spent in timing the appearances of characters in the show and other trivial minutiae could not have been better spent on more worthwhile endeavors.

Preface

When I gave my talk at NSNorth 2013, An Educated Guess (PDF) about building better software, one of the points I stressed was about understanding people and working together with them in better ways. This means knowing where you are strong and weak, and where the people you work with are strong and weak, and acknowledging the collective goals you share: to make better software. This means knowing how to deal with other people in positive ways, giving criticism or making suggestions for the betterment of the product, not for your own ego.

I don’t think I expressed my points very clearly in the talk, so I’d like to take some time now and provide something of a more concrete example: dealing with GitHub’s Pull Request feature. This will be a multi-part series where I describe ways to use the feature in better ways, with the end result being an improved product, and also an improved team.

This particular example deals specifically with iOS applications while using GitHub’s Pull Requests (and assumes you’re already familiar with the general branch/merge model), but I hope you’ll see this could just as easily apply to other software platforms, and other forms of version control. This guide stems from my experiences at Shopify and at the New York Times and is more focused on using pull requests within a team, but most of this still applies if you’re doing open source pull requests to people not on your team.

Writing Great Pull Requests

Let’s say you’ve just completed work on some nasty bug (I’ll be using a bugfix as an example, but I’ll note where you might like to do things differently for a feature branch), you’ve got it fixed on your branch and now you’re ready to get the branch merged into your team’s main branch. Let’s start from the top of what you should do to make a great pull request.

Check the diff

The first thing you’ll want to do before you even make your pull request is review the differences between your branch and the target branch (usually master or develop). You can do this with lots of different tools, but GitHub has this built-in as part of the “Branches” page where you can compare to the default branch. Even better, in the “New Repository View” they’re beginning to roll out, reviewing your changes is now part of the Pull Request creation process in the first place.
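If you’d rather review the diff locally before it ever hits GitHub, git’s three-dot diff syntax shows the same branch-against-base comparison a pull request will. The sketch below is self-contained and entirely hypothetical: the repo, file, branch names, and commit messages are all invented for the demo.

```shell
set -e

# Throwaway repository so the sketch runs anywhere.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "base" > Player.m
git add Player.m
git commit -qm "Initial commit"

# Make sure a 'master' ref exists, whatever the default branch is named.
git rev-parse --verify -q master >/dev/null || git branch master

# Do the bugfix work on its own branch, per the branch/merge model.
git checkout -qb fix-rotation-crash
echo "fix" >> Player.m
git commit -qam "Fix crash when rotating during playback"

# Three dots compare the branch against where it diverged from master --
# the same set of changes a pull request into master would show.
git diff master...fix-rotation-crash
git diff --stat master...fix-rotation-crash
```

The `--stat` form gives a quick file-by-file summary, which is handy for spotting files you didn’t mean to touch before a reviewer does.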

Here’s where you’re going to look for last minute issues before you present your work to a reviewer. I’ll be writing about exactly what you should be looking for in Volume 2 later on, but the basic gist is: when looking at the diff, put yourself in the shoes of the reviewer, and try to spot issues they might find, before the review even starts. This is sort of like proofreading your essay or checking your answers before handing in a test. If you could include animated gifs on your tests.

Clean up any issues you spot here and push up those changes too (don’t worry, all changes on the branch will get added to your pull request no matter when you push them up).

Use a descriptive title

The title is the first thing your reviewer is going to see, so do your best to make it as descriptive as possible, as succinctly as possible. Don’t be terse and don’t be verbose, but give it a good, memorable title. Remember, depending on how the team works, the reviewer might have a lot on their mind, so making it easier to tell apart from other pull requests at a glance will make things easier for them to review yours.

Some teams assign ID numbers to all features and bugfixes from issue tracker “stories”, and if your branch has one associated with it, it’s also a good idea to include that somewhere in the title, too. This allows software integration between GitHub and your team’s issue tracker software so the two can be linked together. Also, including the ID number in your pull request title helps eliminate the chance of ambiguity. If the reviewer really isn’t sure which issue the branch relates to, they can always use the ID number to verify.

Finally, try to be explicit with the verbs you use in the title. If your branch fixes a bug, use the word “fixes” or “fixed” somewhere in the title. If it implements a feature, use the word “implements” or “adds”, etc. This tells the reviewer at a glance what kind of pull request they’re dealing with without even having to look inside. As a bonus, when using an ID number as discussed above, some issue trackers will automatically pick up on key words in your title. Saying “Fixes bug #1234” can cause integrated issue trackers to automatically close the appropriate bug in their system. Let the computer work for you!
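As a sketch, a couple of titles following these suggestions might look like the following (the features and issue numbers here are invented):

```
Fixes crash when rotating the video player during playback (#1234)
Adds playlist support to the video player (#2345)
```

Each one leads with an explicit verb, names the affected area, and carries the tracker ID, so it’s recognizable at a glance in a list of open pull requests.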

The Description

The last and most important thing you need to do to write a great pull request is to provide a good description. For me, this involves three things (although I’m open to hearing more/others), loosely borrowed from a Lightning Talk given by John Duff at Shopify. In a good description, you need to tell the reviewer:

What the pull request does.

How to test it.

Any notes or caveats.

1. What it does

This is the most essential part of the pull request: describing what the changes do. This means providing a little background on the feature or bugfix. It also means explaining, in general terms, how the implementation works (e.g. “We discovered on older devices, we’d get an array-out-of-bounds exception” or “Adds a playlist mode to the video player by switching to AVQueuePlayer for multimedia”). The reviewer should still be able to tell what’s going on from the code, but it’s important to provide an overview of what the code does to support and explain it.

The real benefit of doing this is it gives your team a chance to learn something. It gives anyone on your team who’s reviewing the code a chance to learn about the issue you faced, how you figured it out, and how you implemented the feature or bugfix. Yes, all of that is in the code itself, but here you’ve just provided a natural language paragraph explaining it. You’ve now created a little artifact the team can refer to.

As an added bonus, when writing it up yourself, you’re also taking the opportunity to review your own assumptions about the branch, and this might reveal new things to you. You might realize you’ve fallen short on your implementation and you’ll be able to go back and fix it before anyone has even started reviewing it.

You’re the project’s mini-expert on this mini-topic; use this as a chance to let the whole team learn from it.

2. How to test

Some development teams have their own dedicated QA teams, in which case providing testing steps isn’t usually as essential, because the QA team will have their own test plan. If your team does its own QA (as we did at Shopify) then it’s your responsibility to provide steps to test this branch. That includes:

The simplest possible way to see a working implementation of whatever you’re merging in.

If your branch fixes something visual in the application, it might be a good idea to provide some screenshots highlighting the changes. If your branch involves a specific set of data to work with, provide that too. Do what it takes to make it easier for the reviewer.

Of course, the reviewer should be diligent about testing this on their own anyway (in steps I’ll describe in Volume 2), but when you provide steps yourself, you’re again reviewing the work you’ve done and possibly recognizing things you’ve missed, cases you’ve overlooked that might still need work once a reviewer checks them over. This is another chance to remind yourself of strange test cases you might have not thought about.

3. Notes and Caveats

The last section doesn’t always need to be included, but it’s a good catch-all place to put extra details about the branch for those who are curious. It’s also a great place to explain remaining issues or todos with the pull request, or things this branch doesn’t solve yet.

If you’ve made larger changes to the project, this might be a good place to list some of the implications of this change, or the assumptions you’ve made when making the changes. This again gives the reviewer more clues as to what your thinking was, and how to better approach the review.
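Putting the three parts together, a description might look something like this sketch (all the details are invented, and the exact format is whatever works for your team):

```
What it does
------------
Fixes the crash when rotating the device during video playback (#1234).
On older devices the rotation callback fired before the player view
finished loading, causing an array-out-of-bounds exception; the callback
is now deferred until loading completes.

How to test
-----------
1. Open any video on the oldest device or simulator available.
2. Rotate the device during the first second of playback.
3. Before this branch, the app crashes; now playback continues.

Notes and caveats
-----------------
This doesn't address the similar (but rarer) crash when backgrounding
during playback; that's tracked separately in the issue tracker.
```

Even when a section is short, keeping all three headings makes it obvious to the reviewer that nothing was forgotten.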

Assign a Reviewer

If it makes sense for your team, assign someone to review the pull request. When choosing a reviewer, try to find someone who should see the request. Who should see it? It depends. If you’re fixing an issue in a particularly tricky part of the application, try to get the person who knows that area best to review it. They’ll be able to provide a more critical eye and find edge cases you might not have thought of. If you’re changing something related to style (be it code style or visual style), assign the appropriate nerd on your team. If you’ve got an issue that would reveal and explain the internals of a part to someone who might not know them as well (like a newcomer on the team), assigning them the pull request will give them a good chance to learn.

This doesn’t necessarily mean assign it to the person who will give you the least strife, because sometimes strife is exactly what you need to improve yourself and the code (if the reviewer is giving you strife just for their own ego, that’s another story which I’ll discuss in Volume 2).

Remember, if the reviewer finds issues with your code, it’s not personal, it’s meant to make the project better.

Not always applicable things

Some of the suggestions I’ve made will seem like overkill for some pull requests, and in a way that’s a good thing. They’ll make less sense for smaller pull requests, where the changes are more granular or uninvolved — and these are really the best kinds of pull requests, because they change only small amounts of things at a time and are easier to review. But sometimes larger changes just can’t be avoided, so that’s where these suggestions make the most sense.

Building Better Software by Building Software Better

These are tips and guidelines; suggestions for how to make it easier for the people you collaborate with. Once you think of Pull Requests as really a form of communication between developers, you see them as an opportunity to collaborate in better ways. It becomes less about passing/failing some social or technical test, and more about improving both the team and the project. It’s a chance for all parties involved to learn something new, and do so in a faster way. It’s not about ego, it’s about doing better collective work.

Errors have become something of a bad thing, but they need not be that way. Conceptually, an error should be a minor mistake or misjudgement, a simple slip-up, and usually nothing too serious.

But this is not the world errors live in, because they live in our world, and in our world, errors become something much more grave. Our world is the world of the human, and if you think about it, all errors really boil down to human error at some point. What should be treated as a common wrinkle to be casually flattened out is instead treated as a glaring issue, something alarming which someone needs to be alerted to. Yellow warning signs, boxes erupting from the screen to rub noses in errors, red squiggly underlines pointing out mistaken homophones and finger slips. It gets to the point where errors in the world of humans start to look an awful lot like getting a paper back from a particularly anal high school English teacher. Is it any wonder people are fearful of computers when all they do is evoke tremors of high school nightmares?

This depraved treatment of errors in software should come as no surprise to anyone familiar with current software development. Those who write software are forced to write it for an unforgiving computer, and they are tasked with the grueling edict to coerce every decision into a zero or a one, a yes or a no, a right or a wrong. Is it any wonder the software itself reflects the computer it runs on?

Computer program writers are not the sole source of humanity’s maltreatment of errors, but they are amongst its most vicious of perpetrators, possibly due to a hypersensitivity to the likelihood of errors. A software developer knows the likelihood of errors is high, and an error is a commonplace and usually simple issue, and yet ironically so little seems to be done to actually fix the errors when they happen. Instead, the focus is on preventing the errors, which must be a fool’s errand, because as we all know, a sufficient number of errors happen nonetheless.

Attempting to prevent errors is natural, but specious logic. Attempting to prevent errors seems natural, because from a very young age, we’re taught errors are bad. Errors are not something that should be corrected, but instead it is the making of errors that should be corrected — we’re taught we shouldn’t make them in the first place, when what we really should be taught is how to learn from them when they happen. Parents tell their children not to cry over spilled milk, but making a mess is a cause for aggravation. A teacher tells students everyone is smart in their own way, and yet those who aren’t smart at passing contrived tests feel bad at their errors. Preventing errors seems natural to us because we’ve had the fear of them driven into us, not because there’s actually anything inherently bad about them.

The notion that you’re trying to control the process and prevent error screws things up. We all know the saying it’s better to ask for forgiveness than permission. And everyone knows that, but I think there is a corollary: if everyone is trying to prevent error, it screws things up. It’s better to fix problems than to prevent them. And the natural tendency for managers is to try and prevent error and over plan things.

Software developers are notorious time wasters when it comes to attempting to prevent errors. They’ll spend weeks trying to make the software perfect, provably perfect, all in the name of avoiding errors. They’ll throw and catch exceptions in a weak attempt at playing keepaway with an error and the user, but inevitably all balls get dropped. They’ll craft programming interfaces so flexible, the framework can reach around and scratch the back of its own hand (this is called recursion). Abstract superclasses, class factories, lightweight objects (whatever the hell those are), all in the name of some kind of misplaced mathematical purity never reached in the admittedly begrimed world of software development. These dances of the fingers ultimately come down to attempts at preventing errors in the system itself, but they too are folly, because in the future, one of two things will happen:

The system will change, but the developers couldn’t have predicted in which ways, so all the preparations for preventing this implicit error were incorrect, and need to be fixed anyway.

The system will not change, and so all the preparations were in vain.

At first, it seems software developers treat software as though it were still grooves and dots punched into pieces of paper, shipped off to be fed into the mouth of a husky mainframe in another country. Immutable, unmalleable and unchangeable program code, doomed to prevent only the errors its developers could predict. But at least punch cards are flexible. Instead, it seems more like the program code has been chiseled into stone. That’s it. You prevent some errors and punt the rest of them off to the user, to make them feel bad about it.

We deny these errors. We deny them and pass them off to different systems, computer or person. We treat errors as something shameful to deal with and something shameful to have caused. But errors are no big deal. Errors should be expected and be inherent in the design. From a debugging level, errors should be expected and presented to all levels of a development team so that they can always track them down quickly. From an organizational level, errors should be seen as a chance to infer new information about the organization’s strengths and weaknesses. From a user perspective, errors are a chance to explore something off the beaten path.

Errors allow for spontaneity and for exploration. Errors allow for that angular square you went to school with to loosen up and meet some new curves. Errors in DNA created you and me. How can software change if we embrace, instead of deny, errors in our systems?

Last year Lea Redmond and I co-designed a game/installation called Toy Chest for the SF Come Out and Play Festival. Originally we called the game Toy Fight, but found that this wasn’t putting people in an appropriately cooperative/improvisational frame of mind. The basic idea was to design a game (and installation for the exhibit) which would allow players to bring any toy they wanted to a playful contest.

The whimsical absurdity of Optimus Prime going head to head with My Little Pony motivated us, as did finding new ways to play with old toys, and meditating upon material culture. It was also a fun excuse to collaborate, since Chaim mostly makes screen-based works, and Lea’s creations tend to be physical three dimensional things.

In 2004, Engelbart gave a video interview with Robert X. Cringely, talking about how his ideas came to be, and more importantly what he set out to do. It paints Engelbart as a really good soul with an altruistic vision. And he just seems like the kindest person in the world, too.

Engelbart had an intent, a goal, a mission. He stated it clearly and in depth. He intended to augment human intellect. He intended to boost collective intelligence and enable knowledge workers to think in powerful new ways, to collectively solve urgent global problems.

The problem with saying that Engelbart “invented hypertext”, or “invented video conferencing”, is that you are attempting to make sense of the past using references to the present. “Hypertext” is a word that has a particular meaning for us today. By saying that Engelbart invented hypertext, you ascribe that meaning to Engelbart’s work.

Almost any time you interpret the past as “the present, but cruder”, you end up missing the point. But in the case of Engelbart, you miss the point in spectacular fashion.

Our hypertext is not the same as Engelbart’s hypertext, because it does not serve the same purpose. Our video conferencing is not the same as Engelbart’s video conferencing, because it does not serve the same purpose. They may look similar superficially, but they have different meanings. They are homophones, if you will.

Douglas Engelbart, best known as the inventor of the computer mouse, has died at age 88. During his lifetime, Engelbart made numerous groundbreaking contributions to the computing industry, paving the way for videoconferencing, hyperlinks, text editing, and other technologies we use daily.

Engelbart invented the mouse, the graphical user interface, video conferencing, and hyperlinking. Before 1968. And we still haven’t caught up to most of his advancements today.

The 150th anniversary of the Battle of Gettysburg is upon us. The Civil War and Gettysburg remain one of the most integral and well-documented parts of American history. In hopes of honoring this extra special anniversary, here are ten little known anecdotes about the Battle of Gettysburg, found in the timeless and timely resource The Gettysburg Nobody Knows, an essay collection edited by Gabor S. Boritt.

A More Comprehensive Google Reader Archive. So it turns out the “Google Takeout” service for Reader doesn’t include everything, but this GitHub project appears to be comprehensive. If you’re really serious about your Reader data, give this a run before July 1, 2013.

Bieber’s Boards and Cords. If anyone in the Fredericton area is seeking woodwork, firewood, milled wood or tree removal, Donny Bieber is your man. Be one of the first to see his newly-launched website, designed by me.

Like most fields right now, building design and construction has never before had so much data available, and such an uneven distribution of skills and tools that might let us make sense of it and free our thinking for higher-order problems. Even if it were tenable now (it isn’t), it is only becoming less so to waste brainpower on tedium better handled by these infernal yet stupendously amazing machines.

Ryan knows what’s up. If you need someone for a project involving buildings or building things, Ryan is your man.

The first real-world demo of Google Glass’s user interface made me laugh out loud. Forget the tiny touchpad on your temples you’ll be fussing with, or the constant “OK Glass” utterances-to-nobody: the supposedly subtle “gestural” interaction they came up with–snapping your chin upwards to activate the glasses, in a kind of twitchy, tech-augmented version of the “bro nod”–made the guy look like he was operating his own body like a crude marionette. The most “intuitive” thing we know how to do–move our own bodies–reduced to an awkward, device-mediated pantomime: this is “getting technology out of the way”? […]

The assumption driving these kinds of design speculations is that if you embed the interface–the control surface for a technology–into our own bodily envelope, that interface will “disappear”: the technology will cease to be a separate “thing” and simply become part of that envelope. The trouble is that unlike technology, your body isn’t something you “interface” with in the first place. You’re not a little homunculus “in” your body, “driving” it around, looking out Terminator-style “through” your eyes. Your body isn’t a tool for delivering your experience: it is your experience. Merging the body with a technological control surface doesn’t magically transform the act of manipulating that surface into bodily experience. I’m not a cyborg (yet) so I can’t be sure, but I suspect the effect is more the opposite: alienating you from the direct bodily experiences you already have by turning them into technological interfaces to be manipulated.

I feel the same way about interfaces like Kinect or the Oculus Rift. Waving my arms around in the air with no notion of feedback is terribly unintuitive. Nothing in the real world works that way, and it completely ignores all the virtues of the human arm and hand. Minority Report and Google Glass might look cool, but they’re farcical at best, and counter-productive to making computers better to use at worst.

You, hear me! Give this fire to that old man. Pull the black worm off the bark and give it to the mother. And no spitting in the ashes!

It’s an odd little speech. But if you went back 15,000 years and spoke these words to hunter-gatherers in Asia in any one of hundreds of modern languages, there is a chance they would understand at least some of what you were saying.

That’s because all of the nouns, verbs, adjectives and adverbs in the four sentences are words that have descended largely unchanged from a language that died out as the glaciers retreated at the end of the last Ice Age. Those few words mean the same thing, and sound almost the same, as they did then.

When I left Uganda this winter I had finally broken the 300-page barrier in David Foster Wallace’s gargantuan novel, Infinite Jest. I’ve started it three or four times in the past and aborted each time for attentional reasons. But 300 pages felt like enough momentum, finally, to finish. Then I hit my first American airport, with its 4G and free wi-fi. All at once, my gadgets came alive: pinging and alerting and vibrating excitedly. And even better, all seven seasons of The West Wing had providentially appeared on Netflix Instant. I’ve only finished 100 more pages in the two months since.

I always binge on media when I’m in America. But this time it feels different. Media feels encroaching, circling, kind of predatory. It feels like it’s bingeing back.

The basic currency of consumer media companies—Netflix, Hulu, YouTube, NBC, Fox News, Facebook, Pinterest, etc.—is hours of attention, our attention. They want our eyeballs focused on their content as often as possible and for as many hours as possible, mostly to sell bits of those hours to advertisers or to pitch our enjoyment to investors. And they’re getting better at it, this catch-the-eyeball game.

Consider Netflix. These days, when one episode of The West Wing ends, with its irresistible moralistic tingle, I don’t even have to click a button to watch the next one. The freshly rolling credits migrate to the top-left corner of the browser tab, and below to the right a box with a new episode appears, queued up and just itching to be watched. Fifteen seconds later the new episode starts playing, before the credits on the current episode even finish. They rolled out this handy feature—they call it Post-Play—last August. Now all I have to do is nothing, and the moralistic tingle keeps coming.

All the media companies are missing is “Achievements” and we’ll be in full-blown dystopia.

If the federal government can’t even count how many laws there are, what chance does an individual have of being certain that they are not acting in violation of one of them? […]

Over the past year, there have been a number of headline-grabbing legal changes in the US, such as the legalization of marijuana in CO and WA, as well as the legalization of same-sex marriage in a growing number of US states.

As a majority of people in these states apparently favor these changes, advocates for the US democratic process cite these legal victories as examples of how the system can provide real freedoms to those who engage with it through lawful means. And it’s true, the bills did pass.

What’s often overlooked, however, is that these legal victories would probably not have been possible without the ability to break the law.

The state of Minnesota, for instance, legalized same-sex marriage this year, but sodomy laws had effectively made homosexuality itself completely illegal in that state until 2001. Likewise, before the recent changes making marijuana legal for personal use in WA and CO, it was obviously not legal for personal use.

Advanced Alien Civilization Discovers Uninhabitable Planet. According to scientists from the advanced alien civilization, despite possessing liquid water and a position just the right distance from its sun, the bluish-green terrestrial planet they have named RP-26 cannot sustain life due to its eroding landmasses, rapidly thinning atmosphere, and increasingly harsh climate.

“Theoretically, this place ought to be perfect,” leading Terxus astrobiologist Dr. Srin Xanarth said of the reportedly blighted planet located at the edge of a spiral arm in the Milky Way galaxy. “When our long-range satellites first picked it up, we honestly thought we’d hit the jackpot. We just assumed it would be a lush, green world filled with abundant natural resources. But unfortunately, its damaged biosphere makes it wholly unsuitable for living creatures of any kind.”

“It’s basically a dead planet,” she added. “We give it another 200 years, tops.”

At a seminar in the Bell Communications Research Colloquia Series, Dr. Richard W. Hamming, a Professor at the Naval Postgraduate School in Monterey, California and a retired Bell Labs scientist, gave a very interesting and stimulating talk, ‘You and Your Research’ to an overflow audience of some 200 Bellcore staff members and visitors at the Morris Research and Engineering Center on March 7, 1986. This talk centered on Hamming’s observations and research on the question “Why do so few scientists make significant contributions and so many are forgotten in the long run?”

From his more than forty years of experience, thirty of which were at Bell Laboratories, he has made a number of direct observations, asked very pointed questions of scientists about what, how, and why they did things, studied the lives of great scientists and great contributions, and has done introspection and studied theories of creativity. The talk is about what he has learned in terms of the properties of the individual scientists, their abilities, traits, working habits, attitudes, and philosophy.

I recently read the linked transcript of this talk and thought Hamming gave some good insight into his process, especially the bits about approaching problems from new directions to make them surmountable. Here are some of my favourite bits, but I really encourage you to read through the whole thing:

What Bode was saying was this: “Knowledge and productivity are like compound interest.” Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity - it is very much like compound interest. I don’t want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime.

[…]

There’s another trait on the side which I want to talk about; that trait is ambiguity. It took me a while to discover its importance. Most people like to believe something is or is not true. Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. If you believe too much you’ll never notice the flaws; if you doubt too much you won’t get started. It requires a lovely balance.

I think this one is really super important, given how often we fill our short bursts of free time by distracting ourselves with phones and such:

Everybody who has studied creativity is driven finally to saying, “creativity comes out of your subconscious.” Somehow, suddenly, there it is. It just appears. Well, we know very little about the subconscious; but one thing you are pretty well aware of is that your dreams also come out of your subconscious. And you’re aware your dreams are, to a fair extent, a reworking of the experiences of the day. If you are deeply immersed and committed to a topic, day after day after day, your subconscious has nothing to do but work on your problem. And so you wake up one morning, or on some afternoon, and there’s the answer. For those who don’t get committed to their current problem, the subconscious goofs off on other things and doesn’t produce the big result.

On computers specifically:

“How will computers change science?” For example, I came up with the observation at that time that nine out of ten experiments were done in the lab and one in ten on the computer. I made a remark to the vice presidents one time, that it would be reversed, i.e. nine out of ten experiments would be done on the computer and one in ten in the lab. They knew I was a crazy mathematician and had no sense of reality. I knew they were wrong and they’ve been proved wrong while I have been proved right. They built laboratories when they didn’t need them. I saw that computers were transforming science because I spent a lot of time asking “What will be the impact of computers on science and how can I change it?” I asked myself, “How is it going to change Bell Labs?” I remarked one time, in the same address, that more than one-half of the people at Bell Labs will be interacting closely with computing machines before I leave. Well, you all have terminals now. I thought hard about where was my field going, where were the opportunities, and what were the important things to do. Let me go there so there is a chance I can do important things.

On doing great work:

You should do your job in such a fashion that others can build on top of it, so they will indeed say, “Yes, I’ve stood on so and so’s shoulders and I saw further.” The essence of science is cumulative. By changing a problem slightly you can often do great work rather than merely good work. Instead of attacking isolated problems, I made the resolution that I would never again solve an isolated problem except as characteristic of a class.

And finally:

Let me tell you what infinite knowledge is. Since from the time of Newton to now, we have come close to doubling knowledge every 17 years, more or less. And we cope with that, essentially, by specialization. In the next 340 years at that rate, there will be 20 doublings, i.e. a million, and there will be a million fields of specialty for every one field now. It isn’t going to happen. The present growth of knowledge will choke itself off until we get different tools. I believe that books which try to digest, coordinate, get rid of the duplication, get rid of the less fruitful methods and present the underlying ideas clearly of what we know now, will be the things the future generations will value.

Of course it’s my bias, but I see this as being solved in the medium of the computer. I don’t know how, but if a computer isn’t going to knock down this mental wall, it’s at least going to be giving us cracks to wedge in on.

I keep finding myself thinking about this new Mac Pro. It’s not that I’m lusting after speed of the memory, SSD, CPU, or GPUs for what I do. The more I think about this new Mac Pro the more I find myself wanting to write software for it. To me it’s become the most interesting piece of new hardware since the original iPad. Well, maybe Retina in the iPhone 4 but that didn’t present an entire new class of problems to think about. […]

I keep finding myself thinking up applications for all that compute power and dreaming up what kind of software I could write to take advantage of it.

I think these are the kinds of thoughts developers should be having more often. When a new device comes out, instead of thinking of how the device can improve what we already do, instead try to think of what altogether new sorts of software this device enables.

The difference between 2 images per second and 20 images per second might only be one order of magnitude, but the experience is wholly different because now humans perceive it as motion.

What’s the software equivalent of motion with these new Mac Pros?

Self: The Movie. Here’s a twenty-minute video from 1995, recorded by Sun Microsystems, about the Self programming language. I’d heard of the language before, but didn’t really know much about it until today.

It’s a very interesting take on a programming environment, and it reminds me a bit of Smalltalk, except taken to another level. A great introduction to thinking of programming environments in different ways.

I’ve been running Speed of Light since early 2010, and it continues to be a more rewarding experience with every passing day. I’ve used it as a place to explore my thoughts and expand my writing abilities, and I’ve made some friends in the process. Though my readership is modest, it is also thoughtful (and handsome).

There seems to be a trend where someone with a website like mine reaches a point in their writing or readership when they decide to start making a profit. This is not a bad thing, and many writers with large readerships make the decision to go at least semi-pro with their writing.

I don’t think this is the right path for me, first and foremost because I wouldn’t know where to begin without making it shitty and ensuring failure. I just don’t have enough readers to attract any kind of moderate money from this website, nor do I want to resort to tactics to try and coerce such a readership. My articles may not make the rounds on Twitter or make it to the top (or even bottom) of Hacker News, and I may not get errant clicks because of the inflammatory or salacious headlines I choose not to write, but that’s something I’m damn proud of and I don’t intend on changing.

So instead of trying to convince myself of a goal I really don’t want, and then inflict upon my readers tactics to achieve that goal, I want to try something different.

I’m not taking this website full time, and I’m not even taking it semi-pro. I’m not adding advertising or sponsorships and I’m not adding a membership. In fact, you could have skipped this altogether and not seen any change, and that’s OK with me.

Enter the Tip Jar

What I’m doing is basically putting out a tip jar, nothing more and nothing less. It’s a way for people who like what I already write to encourage me to write more in the same vein. Here’s how it works:

I’m not trying to sway anyone to do this; it’s just an experiment to let you do it, if you’d like. The items on my wishlist tend to be about the topics I like to write about, so it’s a way for you, the reader, to encourage (or thank) me to write more like I do. Or maybe to expand my horizons. Or maybe you think something on the list is particularly awesome (the order of the items on the list doesn’t indicate preference; I recently had to transfer them all over from an older Amazon account).

So there it is. My wishlist: it’s a way for you to give me a tip if you so choose, or a way for you to just look at my interests if not.

Here’s to many more years of Speed of Light!

Planet Zoo (2010). Anthony Doerr, writing about what we all collectively do to the planet, and what we all collectively don’t do to help it:

In most American feedlots, beef cattle live their lives standing in or near their own manure. E. coli O157:H7—often found in cow feces—infects about 70,000 Americans a year and kills about 52. Undercooked or raw hamburger has been implicated in many of the documented outbreaks.

What has been our solution? Take the cows out of their own shit? Not quite. Instead we’ve decided to ramp up the antibiotics and treat ground beef with ammonia-drenched filler. We love technological fixes that allow us to preserve our existing systems. Professional football players are getting too many concussions. What’s our solution? Lobby for better helmets. Cheap calories are producing heart disease in too many Americans. What’s our solution? Give people anti-cholesterol statins that may be linked to anxiety and depression.

Look, I wouldn’t trade the 21st century for any other. We have toilet paper and vitamin-fortified milk and a measles vaccine. We can buy avocados in Fairbanks in January. But sometimes, particularly in the United States, we tend to put too much faith into the transformative powers of technology. Is progress really a curve that sweeps perpetually, unfailingly higher? Wasn’t toy-making or winemaking or milk-making or cheese-making or cement-making sometimes performed with more skill 300 or 700 or 1,900 years ago? I think of a tour guide I once overheard in the Roman Forum. She pointed with the tip of a folded umbrella at an excavation and said, “Notice how the masonry gets better the earlier we go.”

Later:

There’s mercury on our mountaintops and antidepressants in our groundwater. Earthworms in American farm fields have been found to have caffeine, household disinfectant, and Prozac in them. Scientists have found antibiotic-resistant genes in 14 percent of the E. coli in the Great Lakes. Maybe even more astounding, they’ve found antibiotic-resistant E. coli in French Guiana, in the intestines of Wayampi Indians—people who have never taken antibiotics.4

With every year that passes, Earth becomes a little more like a gorgeous, huge, and mismanaged zoo. Is it really relevant anymore to argue that one thing is natural while another thing is not?

How can you ensure that a viewing keeps the Vader reveal a surprise, while introducing young Anakin before the end of Return of the Jedi?

Simple, watch them in this order: IV, V, I, II, III, VI.

George Lucas believes that Star Wars is the story of Anakin Skywalker, but it is not. The prequels, which establish his character, are so poor at being character-driven that, if the series is about Anakin, the entire series is a failure. Anakin is not a relatable character, Luke is.

This alternative order (which a commenter has pointed out is called Ernst Rister order) inserts the prequel trilogy into the middle, allowing the series to end on the sensible ending point (the destruction of the Empire) while still beginning with Luke’s journey.

Effectively, this order keeps the story Luke’s tale. Just when Luke is left with the burning question “how did my father become Darth Vader?” we take an extended flashback to explain exactly how. Once we understand how his father turned to the dark side, we go back to the main storyline and see how Luke is able to rescue him from it and salvage the good in him.

Also, like the story itself, there’s a twist:

Next time you want to introduce someone to Star Wars for the first time, watch the films with them in this order: IV, V, II, III, VI

Notice something? Yeah, Episode I is gone.

While I don’t think the “Hayden as Anakin’s ghost at the end of ROTJ” issue is a real problem, Rod Hilton makes a great argument for this order of viewing. It’s at least worth reading through his argument, if you’re a Star Wars nerd yourself.

The original trilogy gave us a window to one of the most interesting universes ever created. It is a fantasy, filled with amazing aliens and wonderful characters. The Phantom Menace was the film that could expand on this universe. We could have gotten more fantastical worlds, more foundation to the mythos. What we instead got was Tatooine… again. It could have traveled the entire universe but it instead gave us a planet we already know. This is but a tiny example of the unbelievably unimaginative feel this film has. This is most present in its plot. We start out this film with a dispute about tax regulations. Really? What a hook! I’m riveted!

“What is happening?” he said, staring blankly at the screen. “I don’t get it. What is this?” There was no one in the apartment to answer him. “These episodes feel so different! Why isn’t this more like what I remember? Why aren’t I laughing?”

The cop wanted the show to be as funny as he remembered. He wanted it to reveal itself more openly. He wanted it to make him laugh and not make him wonder what was going on. He slammed his Macbook Air shut and stomped around his apartment. He wanted to tweet about his frustration, maybe see if other people shared his feelings, but he didn’t want to be accused of spoiling season four for other people who hadn’t begun watching it yet.

“I’m so angry!” he said. He stopped and stood very still in the middle of his apartment. “Ugh! So mad!” he said. “I feel like I could …” his mind scanned every file in its memory for the aptest word, the metaphor, the action that would properly convey the feelings he was experiencing. “I feel like I could slap a vagina,” he thought.

Well written satire making me cringe all kinds about myself.

Feeling Democratic. Ryan McCuaig on what’s actually the big deal about the new iOS UI (hint: it’s not the icons):

The frameworks for providing parallax effects based on the gyroscope and adding physics to enhance the illusion that real things are being manipulated are incredible. Motion effects and dynamics are now very easy to apply and play with, which democratizes them and makes it possible for the less technically-inclined among us to participate in building up the relatively uncharted design language around them.

The closest comparison I can think of is the effect the LaserWriter had on print design. I expect the same period of taking things way too far and backing off. But the LaserWriter completely transformed print design. I expect the same here.

Your First iOS App Book. If you’re following WWDC and wishing to build your own apps, check out my friend Ash’s new book. He put a ton of work into it, and it’s well done.

Or: Jason Brennan gets sentimental about discovering his Canadian pride only after becoming an expat.

After spending the first twenty-four-and-a-half years of my life living obliviously as a Canadian, in January 2013 I moved to the United States to live with my girlfriend in New York City. My time here has been nothing short of fantastic, but it wasn’t until I got here that I realized what my home country meant to me.

This is not the simple story of “you don’t know what you’ve got till it’s gone”, although there are some aspects of that. But I think the real crux of it is the culture of my home country was so subliminal to me, being immersed in it, that I didn’t even realize it existed until it was missing.

Growing up in the Maritimes I was submerged in — drenched by — CanCon, the CRTC’s (Canada’s version of the FCC, essentially) mandate to play at least 30 per cent Canadian content on the airwaves. This meant Canadian artists were given a government-backed promotion on the radio and television, to give them a fighting chance against the seemingly limitless American music industry. For someone developing a musical taste in the early 2000s, this meant hearing lots of Rush, The Guess Who, and The Tragically Hip (the latter I specifically loathed). But what I didn’t realise was, mandated or not, these artists (and many others, like 54-40, I Mother Earth, Our Lady Peace, Barenaked Ladies, hell, even Nickelback) were indeed infused into Canadian culture, a culture in which I was unknowingly a member. To me, they were just overplayed bands, constant requests on 105.3 FM (“The Fox”)’s “Drive At Five” request line, repetitive and unoriginal staples and traffic-jam anthems.

Almost always, the songs were written about subjects I could barely relate to, whether they were from the vast Canadian Prairies I’ve never seen, or even the coastal songs of choppy Atlantic waters. From the cushy every-town valley that is Fredericton, New Brunswick, I found little to relate to with the rest of Canada’s geography or the songs about her. What I never realised, however, was that there was something subtle amid it all. There was an effect I could not smell or taste or hear: although I could never relate precisely to any of these stories, I could relate to them as a Canadian on some level. I could relate on the level of being aware of and surrounded by diversity of all kinds. Diversity of geography and of heritage and of language and of politics. These are some of the truisms of Canadian culture, and they were utterly invisible to me living in Canada.

Since leaving Canada, however, these things became almost instantly and painfully apparent to me. Even though I could never relate to the Prairies, they were at least a part of my culture, even if the culture of New Brunswick was “we don’t get the Prairies”. And now, I’m here and that’s sorely lacking. There’s a hole where a misunderstanding of my Canadian culture used to be. Now there’s nothing and I’m choosing to identify that as pride.

You can pry my u’s from my cold dead fingers. YYZ is a second Canadian anthem, and it’s pronounced zed, not zee, zed. I pretend to understand Marshall McLuhan, the Tragically Hip’s lyrics, and Canada’s foreign policy. My name is Jason Brennan, and I Am Canadian.

Yesterday I published an article asking Cocoa developers to rethink a common “Singleton” pattern and to improve it for our sanity:

I recommend hiding the sharedWhatever away from clients of your API. You can still just as easily have a shared, static instance of your class, but there’s no need for that to be public. Instead, give your class’s consumers class methods to work with.

I received three kinds of feedback for this article, the first kind being agreement, and that’s all there is to say about that.

The second theme was “It’s not the convention in Cocoa”. I think the reason for this is that, most of the time, developers are conflating it with a similar but different pattern used in Apple’s frameworks. The most common example is NSUserDefaults:
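For illustration, the pattern in question looks something like this sketch (the preference key here is made up):

```objc
// NSUserDefaults vends a pre-made instance, but it isn't a true singleton.
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
[defaults setBool:YES forKey:@"HasSeenWelcome"]; // hypothetical key

// Nothing stops you from creating your own instance as well:
NSUserDefaults *custom = [[NSUserDefaults alloc] init];
```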

This looks a lot like a singleton but it isn’t. It’s just a way to access the standardUserDefaults, a pre-made object which your app will likely want to interact with. But it in no way implies or means you can’t create your own. The same pattern applies for other classes like NSNotificationCenter and NSFileManager to name but a few.

The third bit of feedback is where I’m a bit foggy, and that’s about the testability of hiding the shared object. I don’t do unit testing very often, but when I do I haven’t run into any issues. From a fundamental point of view, I don’t understand why hiding the shared object should make testing any more difficult (I’m not being coy or shitty, I legitimately just don’t know). As far as I can tell, you’ll still be testing the public interface of your class, and that should be enough. But if I’m missing something (and this is entirely likely) then I’d love to know about it. Write about it on your website or email me.

Engineers See a Path Out of Green Card Limbo. It seems to me like if a foreign student trains at a US school, then it would make sense for the US to allow that student to work freely in the country, instead of incentivizing them to leave for another country.

Here’s a common pattern I see all the time in Cocoa development involving Singletons (let’s put aside any judgement as to whether or not the Singleton pattern is a good one and just roll with this for a moment): the singleton class Thing exposes a class method called +sharedThing which returns a shared instance of the class, and then has a bunch of instance methods to do real work. Here’s what the interface might look like:
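An interface in that style would look roughly like the following (the class and method names are invented for illustration):

```objc
// Thing.h: the common "shared instance" style of singleton interface.
@interface Thing : NSObject

// Returns the shared, global instance of Thing.
+ (instancetype)sharedThing;

// The instance methods that do the real work (names are hypothetical).
- (void)doSomeWork;

@end
```

Every call site then reads `[[Thing sharedThing] doSomeWork];`, a two-step dance.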

Every time I want to do something with the singleton, I’ve got to first request it from the class, then I send that instance a message. It’s straightforward enough, but it gets tedious real quick, and it begins to feel like a part of the implementation is leaking out.

When I use a singleton class, I shouldn’t really have to care about the actual instance. That’s an implementation detail and I should just treat the whole class as a monolithic object. I’m sending a message to the House itself, I don’t care what houseling lives inside.

So instead, I’d recommend hiding the sharedWhatever away from clients of your API. You can still just as easily have a shared, static instance of your class, but there’s no need for that to be public. Instead, give your class’s consumers class methods to work with:
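Here’s a sketch of what that hiding might look like (method names again invented; the `dispatch_once` dance is one common way to lazily create the shared instance):

```objc
// Thing.h: clients see only class methods; no shared instance is exposed.
@interface Thing : NSObject
+ (void)doSomeWork;
@end

// Thing.m
@interface Thing () // class extension for private state, if any
- (void)p_doSomeWork; // the hidden instance method doing the real work
@end

@implementation Thing

// Private: lazily create the shared, static instance.
+ (instancetype)sharedThing {
    static Thing *sharedThing = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedThing = [[self alloc] init];
    });
    return sharedThing;
}

+ (void)doSomeWork {
    // Forward to the hidden instance; callers never see it.
    [[self sharedThing] p_doSomeWork];
}

- (void)p_doSomeWork {
    // ...the real work happens here...
}

@end
```

Clients just write `[Thing doSomeWork];` and never touch the instance.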

If your singleton class needs to store some state (and please try really hard to avoid storing global state), you can still use private properties (via a class extension) and expose the necessary ones as class methods, too. Exposing global state this way is a bit more work, but that friction is a kind of natural immune response of the language, discouraging you from doing it anyway.

Sometimes singletons are a necessary evil, but that doesn’t mean they necessarily have to be unpleasant. Hiding away the implementation detail of a “shared instance” frees other programmers from having to know about the internals of your class, and it prevents them from doing repetitive, unnecessary typing.

When using gesture recognizers, it is almost always far, far better to use UIPanGestureRecognizer than UISwipeGestureRecognizer because it provides callbacks as the gesture takes place instead of after it is said and done.
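A sketch of the difference (the view property and handler name are made up): a pan recognizer’s action fires continuously while the touch moves, so the view can track the finger, whereas a swipe recognizer fires only once, after the gesture has finished.

```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    UIPanGestureRecognizer *pan =
        [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handlePan:)];
    [self.draggableView addGestureRecognizer:pan]; // hypothetical view
}

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    // Called for Began, repeatedly for Changed, then for Ended/Cancelled,
    // so the view moves in lockstep with the touch.
    CGPoint translation = [pan translationInView:self.view];
    pan.view.center = CGPointMake(pan.view.center.x + translation.x,
                                  pan.view.center.y + translation.y);
    [pan setTranslation:CGPointZero inView:self.view];
}
```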

“Allo?” is my first solo show in London, at Kemistry Gallery. Intrigued by everyday life and human interaction, “Allo?” explores our social and asocial behaviours, the relationship between people and how we communicate with one another.

Twice a day, seven days a week, a tractor trailer carrying 8,000 gallons of watery, cloudy slop rolls past the bucolic countryside, finally arriving at Neil Rejman’s dairy farm in upstate New York. The trucks are coming from the Chobani plant two hours east of Rejman’s Sunnyside Farms, and they’re hauling a distinctive byproduct of the Greek yogurt making process—acid whey.

This isn’t just the most beautiful farming-related website I’ve ever seen, but also one of the most beautiful websites I’ve seen, period.

I’ve never met a stronger person. She has lasted through doses of poison that would’ve easily killed any one of us “healthy” people, and she has done so with a degree of poise that is truly unfathomable. In our little startup world, the words tenacity and perseverance are thrown around a lot, but in that context they seem hollow and largely meaningless. Tenacity is far more than simply making it through tough times, and it’s not just a matter of finding a way “back to good.” Kristie has shown me that tenacity comes from living for a purpose, from believing in something so fully that it keeps you alive through six rounds of injecting drain cleaner into your veins. By that definition, I haven’t seen much tenacity in the Silicon bubble many of us call home. […]

I’m doing this because I believe that this is the greatest contribution I can make.

I could’ve become a doctor. All signs pointed to me likely being a very good one. In doing so, I would have gone to work and done my best to save lives every day. In that context, how is some programming environment a greater contribution to the world? Truthfully, it wouldn’t be if I just set out to build an IDE. But that’s not what I did - Light Table is just a vehicle for the real goal. While an IDE probably won’t directly save someone’s life, the things people are able to build with it could do exactly that. My goal is to empower others, to give people the tools they need to shape our lives. Instead of becoming a doctor, I have an opportunity to improve an industry that is unquestionably a part of the future of all fields. Software is eating the world and analytical work is at the core of advances in medicine, hard science, hardware… Human innovation throughout history has been driven by new tools that enable us to see and interact with our mediums in a different way. I’m not building an IDE, I’m trying to provide a better foundation for improving the world.

It’s something I think about a lot too, although thankfully not under such tragic circumstances. But it’s important for every software developer to consider what impact they’re having on the world. It’s important to consider whether what I’m doing is making the best contribution to the world, or whether I’m just following trends and making a buck.

I’ll probably never write software for medical patients, and I’ll probably never write software which lands a rocket full of people on Mars. But if I can write software that helps someone who will do those things, then I will have done my job. If I can enable a scientist or a researcher or even enable a child to express creativity or ideas more clearly, then I will have made my contribution.

In a blog post, the meteorologist Dr. Jeff Masters talks about the largest US wildfires of 2012. Masters mentions that the largest fire burned about 300,000 acres before it was contained. I have no idea how much 300,000 acres is or what types of things are similar sizes and I suspect few other people do, either. But we need to understand this number to answer the obvious question: how much of the United States was on fire? This is why I made Dictionary of Numbers.

I noticed that my friends who were good at math generally rely on “landmark quantities”, quantities they know by heart because they relate to them in human terms. They know, for example, that there are about 315 million people in the United States and that the most damaging Atlantic hurricanes cost anywhere from $20 billion to $100 billion. When they explain things to me, they use these numbers to give me a better sense of context about the subject, turning abstract numbers into something more concrete.

When I realized they were doing this, I thought the process could be automated: perhaps through contextual descriptions people could become more familiar with quantities and begin evaluating and reasoning about them. There are many ways of approaching this problem, but given that most of the words we read are probably inside web browsers, I decided to build a Chrome extension that inserts human explanations of numbers into web pages.

In design this means “draw what you know.” Start by putting down what you already know and already understand. If you are designing a chair, for example, you know that humans are of predictable height. The seat height, the angle of repose, and the loading requirements can at least be approximated. So draw them. Most students panic when faced with something they do not know and cannot control. Forget about it. Begin at the beginning. Then work on each unknown, solving and removing them one at a time. It is the most important rule of design. In Zen it is expressed as “Be where you are.” It works.

Getting something “onto the paper” is an under-appreciated tool.

9. It all comes down to output.

No matter how cool your computer rendering is, no matter how brilliant your essay is, no matter how fabulous your whatever is, if you can’t output it, distribute it, and make it known, it basically doesn’t exist. Orient yourself to output. Schedule output. Output, output, output. Show Me The Output.

I’ve got two thoughts on this:

Dissemination trumps innovation nearly every time. You might have invented the greatest thing ever, but if someone else can get out their lesser invention to more people, it’s going to beat you out. I don’t think this is really what the above quote is referring to, but it reminded me of this.

Get in the habit of regular “releases”, whether this is actually releasing your product, or just checkpoints, or even just having a weekly or daily structure. Aim for completion on this schedule and get in the habit of getting something “out”.

Last week, I released a talk called Drawing Dynamic Visualizations, which shows a tool for creating data-driven graphics through direct-manipulation drawing.

I expect to write a full research report at some point (at which I’ll make the research prototype available as well). In the meantime, here is a quick and informal note about some aspects of the tool which were not addressed in the talk.

This book is incredibly empowering, but also terrifying in that Sheryl confirms the vast majority of my fears in my career. It’s frightening because having my fears enumerated and validated by such a successful woman, along with an equal amount of incredible advice for combating these concerns and succeeding in our chosen careers, leaves little reason to not confront them head-on. She confirms the ramifications of female success that are easy to imagine for any woman who was bullied for good grades in school or who has ever watched a comedy movie about a working woman trying to ‘have it all’. She confirms that success for women will make us less likeable, and that we underestimate ourselves, and that we pass on opportunities that men with the same skills would seize. Read this book and Sheryl Sandberg will effectively deny you the option to let your fears control any of your future decision making.

This sounds like mandatory reading for people of any gender in our industry.

A little while back my friend Charles Perry and I decided to try our hand at putting together a podcast. While we’re fully aware there are lots of great tech podcasts out there vying for your precious listening time, we thought together we could offer our own spin on things and add a bit more to the conversations going on in the independent iOS and Mac development communities.

I’m a big believer in giving back to the community in any way I can. While my occasional rants on this blog are one of my favorite ways to do that, I also thought maybe it was time to start using my physical voice as well as my internal one. Plus, having a discussion with another developer who might actually disagree with me on occasion could certainly be interesting and beneficial to shaping my views. Charles is a really smart, opinionated guy, so hashing out these topics with him made perfect sense to me.

In the first episode they discuss tech conferences, and I was nodding my head in agreement the whole time.

There is one gigantic problem with programming today, a problem so large that it dwarfs all others. Yet it is a problem that almost no one is willing to admit, much less talk about.

[…]

Too goddamn much crap to learn! A competent programmer must master an absurd quantity of knowledge. If you print it all out it must be hundreds of thousands of pages of documentation. It is inhuman: only freaks like myself can absorb such mind boggling quantities of information and still function. I wager that compared to other engineering disciplines, programmers must master at least ten times as much knowledge to attain competence.

I agree. There are so many things you have to learn in order to get anything “on the page” for any kind of programming. The thought of teaching any of my non-programmer friends or relatives how to write even a simple iPhone app gives me a shudder. There are so many necessary parts to deal with before any real work can be done.

Thankfully, there are some other languages which involve significantly less up-front cost to get something onto the page, but newcomers are still limited by having to look up everything before they can understand what to put on the page.

Jonathan suggests how to fix this:

By far the most effective thing we can do to improve programming is: Shrink the stack!

I am talking about the whole stack of knowledge we must master from the day we start programming. The best and perhaps only way to make programming easier is to dramatically lower the learning curve.

[…]

To shrink the stack we will have to throw shit away.

I agree we need to lower the learning curve by requiring less of newcomers to get started, but I don’t think this necessarily comes from eliminating things. I don’t think he’s suggesting we remove features in the sense of what a language can ultimately express, but rather cruft like vestigial APIs. Abstracting those away is fine, but I still think it misses the mark a little bit.

That would be like trying to get more people interested in writing fiction by either removing words from the vocabulary or by creating new metaphors/symbols for complex ideas. Creating new metaphors for complex ideas is a great skill and tool for writing, but it’s not necessarily one that makes writing itself easier.

I think one of the keys to creating a society where everyone can program is to change the nature of what it means to write a program. We need to make it possible for people to express their intent in a more natural way. When humans don’t know what a word means, they infer it from the surrounding language. When humans don’t have a word for a certain meaning, they create one to fill that gap. Why can’t programming be so natural?

I just want my phone and my iPad to do a lot more than “apps-as-entertainment” allow them to do, too.

We’re not seeing a more sophisticated level of software on iOS not because the iPad is a weak computer. Not because touch interfaces are toys. But because the economics of the App Store make sustaining such an app near impossible. It’s simply not worth the investment.

Exactly. If you charged 50 bucks for an app that actually did something, you’d probably lose a lot of sales vs selling for 99 cents. But I think software developers shouldn’t let that scare them away from making sophisticated apps. Short comic strips probably get a much larger distribution than novels, but that doesn’t mean novels shouldn’t still be written.

It’s basically all you hear about in the Apple nerd press: “Apple’s working on a new device that’s going to revolutionize something or other”. It might be a watch, it might be a television, we don’t know what it is but all we know is somehow the device — the hardware — is going to make our lives better. I think that’s a myopic outlook that really offers nothing novel other than a new piece of metal and plastic to hold or gawk at. I don’t think we need new hardware.

What Apple does is identify a category of product in which there’s a lot of potential, where there will clearly be an audience, but where there’s currently no product that doesn’t completely suck. Then it makes a product that doesn’t suck in that category and mops up. It’s a beautiful strategy. And it happens to work.

So where are the crappy wrist computers? There’s the Pebble, I guess. A scrappy Kickstarter project that got some of us nerds excited last year. It’s severely limited in features and not altogether fashionable. So there’s potential for ass-kicking, no doubt. But is that all there is out there today? Where’s Microsoft’s wrist computer? Google’s? Sony’s? Samsung’s?

[…]

My point is, if this were the Next Big Thing, wouldn’t others be trying to do it already? Where’s the clear existing audience Apple wants to tap?

I agree with Joe: an “iWatch” certainly doesn’t match the pattern Apple usually follows, and I would say for a good reason: most customers aren’t asking for it and a newer, micro-device which (probably) runs iOS offers almost nothing above the current hardware we already have.

I don’t think Apple needs any new hardware at all in order to bring the world innovative new products; instead they need to provide us with new ways of working with software.

If you have an ear turned to the Apple news beat, it seems as though new hardware product launches are all anyone cares about. While actually, software is responsible for an overwhelming majority of our experience using Apple platforms. This fact has been deemphasized by the Apple community over the last few years as we rush to see the next new device for our pockets, and it’s about time software gets its share of the attention.

[…]

Software is the real frontier on our new mobile platforms. Apple’s new hardware breakthroughs come on the order of decades, not years. Yes, I’m judging iPhone and iPad as a single line of innovation, because that’s how it really shakes out. Do the platforms serve different needs, yes, but they come from the same core ideas and design compromises. If you’re waiting for a watch to come change your life, you might as well buy Google Glass (is that supposed to be plural, I can never tell) and get it out of your system.

Whether or not Apple continues to release new hardware platforms is still an unknown, but my disdainful guess is they probably will keep releasing gizmos and ignore the bigger picture of the software that runs on them. It’s what people seem to care about, and it’s what sells in the press.

And why do we care so much about the hardware anyway? I think it’s because, nerds though we may be, it’s still much easier for us (and especially for non-nerds) to understand something physical than something abstract like software. Physical things are tangible, but they ultimately depend on the abstract. Every physical invention in human history that we know of arose from a mental, abstract thought. And the best ones, written language, the printing press, the World Wide Web, and even in some regards the handaxe, all allowed for expanded thought and new physical inventions. But none were purely physical.

And a technological society based solely around physical devices is one that lacks imagination to truly take advantage of all those lovely hardware platforms anyway. It would be like a literary society obsessed with printing presses and cover stock. And yet that’s exactly what we expect of Apple and Google and Facebook and all the other tech companies.

I’m not saying there is no room for hardware improvements either evolutionary or revolutionary. I think it’s great for Apple to continue iterating on the Mac, iPhone and iPad and continue to bring us better battery life, performance, and graphics. And I think there are still many more revolutionary improvements which can be made to products of their ilk: things like print-resolution displays (Retina displays are a great step, but they still pale in comparison to the information density we expect from a printed book or newspaper); light, thin, and flexible computers that can be carried around and manipulated as easily as paper; and tactile interfaces so that we can make better use of our extremely dexterous and sensitive hands and fingers when exploring software.

But all of these hardware advancements should come to facilitate the software, not to sell more hardware or to fulfill some science fiction pipe dream. It’s not time to stop thinking inside the box or outside the box. It’s time to stop thinking about boxes altogether.

Bret Victor published a long essay entitled “Learnable Programming” in September 2012 in which he described principles for creating both better programming languages and better programming environments for beginners and experts alike. But unfortunately, not everyone agrees with his stance.

Many expert programmers still exhibit a kind of machismo when it comes to programming, which I find does more harm than good. Instead of acting as a voice of skepticism, it comes off as a voice of elitism, disregarding the difficulty beginners face in learning to write programs, the difficulty of programming as an expert, and the importance of a computer-literate population.

Mark Chu-Carroll objects to Bret’s stance, and to the idea of programmers making it hard for beginners to program on purpose:

For some reason, so many people have this bizarre idea that programming is this really easy thing that programmers just make difficult out of spite or elitism or clueless or something, I’m not sure what. And as long as I’ve been in the field, there’s been a constant drumbeat from people to say that it’s all easy, that programmers just want to make it difficult by forcing you to think like a machine. That what we really need to do is just humanize programming, and it will all be easy and everyone will do it and the world will turn into a perfect computing utopia.

I don’t think Bret is arguing that at all. He’s not saying programmers have intentionally made it difficult for outsiders to join our circles, but that, well, it just is hard for outsiders to join. That instead of explicitly not doing our best, we have been doing our best but that our best isn’t good enough, and the sooner we can admit that and start improving, the better. This is not a bad thing. Improvement is what programmers do all day long, so why not also improve programming itself?

Mark continues:

To be a programmer, you don’t need to think like a machine. But you need to understand how machines work. To program successfully, you do need to understand how machines work - because what you’re really doing is building a machine!

Again, I don’t think Bret is advocating not understanding how a machine works. In fact, I think he’s advocating quite the opposite — by creating a better programming environment and language, it can better enable a new generation of programmers to visualize and understand their programs than ever before. I’ll return to this point in a moment.

Victor thinks that programming itself is broken. It’s often said that in order to code well, you have to be able to “think like a computer.” To Victor, this is absurdly backwards—and it’s the real reason why programming is seen as fundamentally “hard.” Computers are human tools: why can’t we control them on our terms, using techniques that come naturally to all of us?

The main problem with programming boils down to the fact that “the programmer has to imagine the execution of the program and never sees the data,” Victor told me.

Or as Bret wrote in his essay:

Maybe we don’t need a silver bullet. We just need to take off our blindfolds to see where we’re firing.

One of the first things beginners do in any area is learn the terms, after which I believe the labelling of program constructs becomes annoying rather than helpful. We wouldn’t have a mouse-over helper in Maths saying “ ‘+’ is the symbol meaning add two numbers” or in French saying “Je means I” — you learn it early on, quite easily, and then you’re fine. The point of the notation is to express concisely and unambiguously what the program does. I can understand that the labels are a bit more approachable, but I worry that for most cases, they are not actually helpful, and very quickly end up unwieldy.

But again I feel like this is missing the point. I think the example of labels in the programming environment is really just a stepping stone — one stop on the road to being able to see and understand what a program is doing — but it’s not the only thing. Labeling the environment is one thing, but the concept can extend further to let experts reach higher ground. Sure, experts already know the syntax and probably most of the library functions too. Great: now that can be trivialized, and even better, new and more specific program parts can be visualized. Now things specific to the application can be labeled and explained, in context, for all developers of a given project.

Neil continues on other topics of visualization:

I propose that visualisation doesn’t scale to large programs. It’s quite easy to visualise an array of 10 elements, or show the complete object graph in a program with 5 objects. But how do you visualise an array of 100,000 elements, or the complete object graph in a program with 50,000 objects? You can think about collapsible/folding displays or clever visual compression, but at some point it’s simply not feasible to visualise your entire program: program code can scale up 1000-fold with the tweak of a loop counter, but visualisations are inherently limited in scale. At some point they are all going to become too much.

I think this is a very narrow-minded way to approach Bret’s essay. As people who currently have to write code blindly, of course we have a hard time coming up with ways to visualize our data, but fortunately there’s a whole field devoted to this very problem: data visualization (anyone interested in learning more should absolutely read the works of Edward Tufte). As programmers, we’re bad at visualizing data because we’ve never thought of it as a necessary skill. But once our eyes are opened to the benefits of data visualization, not only does it stop seeming impossible, it starts to seem necessary.
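One standard answer to Neil’s scale worry is aggregation: you don’t draw 100,000 elements, you draw a fixed number of summaries of them. Here’s a minimal sketch of that idea (the function names and the text-sparkline rendering are my own, not from either essay):

```python
# Collapse a huge array into a fixed number of buckets, then render the
# buckets instead of the raw elements. The visualization stays the same
# size no matter how large the data grows.

def aggregate(values, buckets=20):
    """Summarize a large list of numbers as per-bucket counts."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / buckets or 1   # avoid zero width when all values equal
    bins = [0] * buckets
    for v in values:
        i = min(int((v - lo) / width), buckets - 1)  # clamp the max value
        bins[i] += 1
    return bins

def sparkline(bins, height=8):
    """Render bucket counts as a crude one-line text histogram."""
    peak = max(bins) or 1
    return "".join("▁▂▃▄▅▆▇█"[min(b * height // peak, 7)] for b in bins)

values = [(i * i) % 997 for i in range(100_000)]  # stand-in for real data
print(sparkline(aggregate(values)))               # 20 characters, not 100,000 points
```

The point isn’t this particular rendering; it’s that summarization is a solved family of problems in data visualization, which is why “it can’t scale” rings hollow to me.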

Neil thinks we don’t have to see to understand:

Someone once proposed to me that being able to create a visualisation of an algorithm is a sign of understanding, but that understanding cannot be gained from seeing the visualisation. Visualisation as a manifestation of understanding, rather than understanding as a consequence of visualisation. I wonder if there’s something in that?

I disagree, and believe there’s much to be gained from understanding the relationship between seeing a concept and understanding it. Alan Kay, building on the work of Piaget and Bruner, had an insight he summarized as follows:

Doing with Images makes Symbols.

This is a relationship between three human mentalities, where we work with the body, the visual system, and the symbolic mind in different but complementary ways. These act as a continuum of thought and interaction, and movement within that continuum is essential. So to gain real understanding of something on the symbolic level, it is far more natural if you not only have images to work with, but actually interact with those images as well. This is one of the essential, founding principles of the modern graphical user interface, a fact which is lost on almost all of its users.

Neil concludes his argument:

I like the blindfold metaphor, because it fits with our understanding of expertise: “he can do that with his eyes closed” is already a common idiom for expertise in a task. Beginner typists look at the keys. Expert typists can type blindfolded. Therefore at some point in the transition from beginner to expert typist you must stop looking at the keys. So it is with programming: you must reach a stage where you can accept the blindfold.

Which unfortunately also brings to mind the metaphor of the blind leading the blind. Lots of experts claim to be able to do something “with one hand tied behind my back”, but none would elect or suggest always working under such conditions. Nobody should proudly hold themselves back from doing the best they possibly can at their work! Accepting blindfold conditions for beginners and experts alike is accepting the current state of programming as the best it can be, without any hope of improving the situation for generations to come.

At the end of the essay, Bret says what I believe is the real crux of his argument:

These design principles were presented in the context of systems for learning, but they apply universally. An experienced programmer may not need to know what an “if” statement means, but she does need to understand the runtime behavior of her program, and she needs to understand it while she’s programming.

Our society has deemed book literacy an essential skill because it’s a key medium through which our society thinks. Computers can offer an even better medium for society to think in, but only if we strive for computer literacy as well. And as with written literacy, this means both reading and writing. Expecting an entire society to write programs the way “experts” write them today is ludicrous, inscrutable, and counterproductive. If we’re to expect members of society to be computer literate, then we must create for them an environment where thinking can be expressed even better than on paper*.

*Yes, this is one reason why Cortex has yet to be released. I’ve yet to solve the problem of understanding and visualizing a Cortex plugin, and without that, it’s cripplingly difficult to create useful programs. This needs to be solved, because it’s irresponsible to expect developers to imagine it all in their heads.

I get really fired up when I think about one of The Greats, one of the people or teams of people in my field who I think are truly exceptional, who have contributed substantial work and who are rewarded copiously for it. They’re loved by some and reviled by others, but the common quality is they change things.

These are my heroes, the ones who make me want to get out of bed every day and be better than the day before at what I do. They set a bar for me, and I don’t want to be just like them; I want to be great in my own ways. I’m not looking for fame; I’m only looking to be one of the Greats. I’ve been studying them for a while now, and here’s what I’ve picked up so far about what they all have in common:

They have Powerful Ideas.

They act on those ideas.

In the simplest, most essential distillation, that’s what they do.

A Powerful Idea isn’t just a good idea, but instead one that lets us see farther. John W. Maxwell has this to say:

What makes an idea “powerful” is what it allows you to do; […] Powerful ideas are those that are richly or deeply connected to other ideas; these connections make it possible to make further connections and tell stories of greater richness and extent (p 187).

These are ideas like Hypertext, the Graphical User Interface, Cut Copy and Paste. Things that are simple in their own respect, but enable a tremendous new reach for humanity. They are not goals or destinations, but instead vehicles for getting us to the next step.

These ideas often don’t appear in dreams or apparitions but are instead culminations of years of dedicated study across a diverse set of fields. Alan Kay studied biology in university, which enabled him to see and create a design for Object Oriented Programming. He modeled computer programs after living cells. Many of Bret Victor’s great insights arise from an application of Edward Tufte’s information visualization principles: Show the Data and Show Comparisons.

When you study the powerful ideas of any field, you’ll almost always see the ideas emerging from analogy and synthesis of ideas from many other, seemingly unrelated fields. The insights often become obvious once you start looking past your own domain.

But a powerful idea is often not enough. Vannevar Bush’s As We May Think, published in 1945, described the Memex, a mechanical, computerized contraption resembling a steampunk lovechild of the World Wide Web and Wikipedia, and yet Bush’s work largely remained in obscurity for nearly fifty years. Why? Because the ideas were ahead of the technology of the time and they couldn’t be built. It’s not a failing of the quality of the invention (da Vinci could hardly ever build his own designs at the time either), but it strikes an important chord: to be a Great, you really need to be able to build it.

I think it’s critical to get these ideas into some form of tangible space, whether it’s a working prototype or a full-fledged product. People need to be able to see and use it, because an idea isn’t set in stone. It needs to be living and evolving. There needs to be a discourse, and part of what makes the Greats so great is that they participate in this discourse.

These aren’t the only things the Greats seem to do, but they are the most fundamental, and everything else I’ve noticed seems to emerge from them. They’re important traits to know, but the point isn’t to set out to emulate them. It’s important not to walk in their footsteps but instead to stand on their shoulders.

Dr. Alan Kay on the Meaning of “Object Oriented Programming”. A friend and I were talking about Kay’s original intentions for OOP the other day, so I thought this link might be interesting to others, as well. It turns out, OOP is a lot less about encapsulated data and methods on the data, and a lot more about messages between “little computers”:

The original conception of it had the following parts.

I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning – it took a while to see how to do messaging in a programming language efficiently enough to be useful).

[…]
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I’m not aware of them.
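Kay’s “only messaging, hiding of state-process, and extreme late-binding” reads almost like a spec. Here’s a toy rendering of it (my own sketch in Python, not Kay’s code or Smalltalk): objects are little black boxes whose only interface is receiving messages, resolved by name at send time.

```python
# A "little computer": state is hidden, and the only way to interact
# with it is to send it a message, which is looked up at the moment
# of sending (late binding), not at compile time.

class Cell:
    def __init__(self):
        self._energy = 0          # hidden state-process; never touched directly

    def receive(self, message, *args):
        handler = getattr(self, "_on_" + message, None)  # late-bound lookup
        if handler is None:
            return self._on_unknown(message)
        return handler(*args)

    def _on_feed(self, amount):
        self._energy += amount

    def _on_report(self):
        return self._energy

    def _on_unknown(self, message):
        # A real system might forward, log, or adapt here.
        return f"cell ignores '{message}'"

def send(obj, message, *args):
    """All interaction flows through messages, never direct state access."""
    return obj.receive(message, *args)

cell = Cell()
send(cell, "feed", 3)
send(cell, "feed", 4)
print(send(cell, "report"))      # 7
print(send(cell, "divide"))      # cell ignores 'divide'
```

Note how the unknown message is handled gracefully rather than crashing the “cell”; that resilience of independent, message-passing units is the biological flavor Kay had in mind, rather than the classes-and-inheritance framing OOP later acquired.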

There’s my vote. Acorn has been using SQLite as its native file format since version 2.0, and it has been wonderful. When writing out and reading in an image I don’t have to think about byte offsets, I mix bitmap and vector layers together in the same file, and debugging a troubled file is as simple as opening it up in Base or your preferred SQLite tool. This sure beats opening a PSD file in a hex editor to figure out what’s going on.
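To make the appeal concrete, here’s a minimal sketch of the document-as-database idea (the schema and names are hypothetical, not Acorn’s actual format): bitmap and vector layers live side by side in one file you can query with any SQLite tool.

```python
# The document *is* a SQLite database: no byte offsets, no custom parser.
import json
import sqlite3

db = sqlite3.connect(":memory:")  # in a real document, this would be the file's path

db.execute("""CREATE TABLE IF NOT EXISTS layers (
                  id      INTEGER PRIMARY KEY,
                  name    TEXT,
                  kind    TEXT,    -- 'bitmap' or 'vector'
                  payload BLOB)""")

# A bitmap layer stores raw pixel bytes; a vector layer stores shape data.
db.execute("INSERT INTO layers (name, kind, payload) VALUES (?, ?, ?)",
           ("background", "bitmap", b"\x00" * 16))
db.execute("INSERT INTO layers (name, kind, payload) VALUES (?, ?, ?)",
           ("annotations", "vector",
            json.dumps([{"shape": "rect", "x": 1, "y": 2}]).encode()))
db.commit()

# Debugging a troubled file is just a query, no hex editor required.
for name, kind in db.execute("SELECT name, kind FROM layers ORDER BY id"):
    print(name, kind)
```

You also get transactions and crash safety for free, which is a lot more than most hand-rolled binary formats can claim.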

Drawn. If you’re looking for an alternative way of interacting with drawn artwork from Bret’s talk.

People are alive — they behave and respond. Creations within the computer can also live, behave, and respond… if they are allowed to. The message of this talk is that computer-based art tools should embrace both forms of life — artists behaving through real-time performance, and art behaving through real-time simulation. Everything we draw should be alive by default.

There’s a common platitude you hear, especially in the aspirational period of a high school Guidance Counsellor meeting: “Anything’s possible!” And as you grow up, you learn to sort of filter out that sentiment. You learn there are limits; you learn there are things in the universe which seemed obvious but really held hidden meanings. Maybe you learned this literally and ran into a sliding glass door. It happens.

Learning limits is an important method of survival, but becoming a slave to those limits can turn you from living to coping. It can do more harm than good.

If you’re a software developer, you’re in a lucky place, because almost anything you can imagine is possible. Yes, there are limits to what hardware and software can currently do, and there are limits in what they can do eventually, too. But far and away, what is possible with a computer is quite limitless.

If you accept this, then new things start to become quite clear to you. You begin to see software as an illustration of your thought, as a way to explore the logic in your head. In a book you can write what you’re thinking, but in software you can express not only what you think, but you can see how that thought holds up to scrutiny. This is a small and simple truth with vast and curious implications. It means your thoughts can not only be heard, but explored, by generations both present and future.

When you don’t accept the possibilities of software, you become limited by the world as it currently exists. We have so many inventions in software which exist merely because the inventor didn’t know they were supposed to be impossible.

When asked how he had invented graphical user interfaces, vector drawing, and object oriented displays all in one year, Ivan Sutherland replied “Because I didn’t know it was hard”. He had no preconceived notion of what was possible, so there was no hindrance. He just invented what he felt he needed in order to express his ideas.

Bill Atkinson famously didn’t know overlapping windows were hard, but he invented them anyway. As an early Apple employee, Bill was one of the visitors on the famed Xerox PARC visits, where he observed early versions of the Alto computer. He thought he’d seen a system with overlapping windows, so when he had to design a graphics system for the Mac, he felt he had to build those too. Little did he know, the Alto never had overlapping windows to begin with. He invented them because he was under the impression they were possible. Imagine!

Software today exists with much already established before us, but we’re still in the incunabula stage. We’re still establishing the rules. Although there might appear to be much precedent in how something works, the truth is we’re in the very early stages. The rules are mutable and many are yet to be written. We’re at a point where it’s critical to continue exploring new ideas; even if there’s only the slightest tinge of possibility, more often than not an idea turns out to be not only possible, but a superior approach.

Don’t ever let what you think you know dictate what you feel might be possible.

Linger Trailer. When I published my review of Linger the other day, I neglected to link to the trailer, so I’m doing that now. It’s so well done, it deserves a link of its own.

I’m going to get right down to it: you should buy Linger for iOS (here’s an App Store link). The app lets you explore the Prelinger Archives, a collection of short movies, ads, PSAs and propaganda from the 20th century all on your iPad (it works great for iPhone too).

Even if you’ve never heard of the Prelinger Archives before, you’re probably still familiar with the style of videos you’ll find there, the old black and white movies, showing assorted clips with a wholesome-sounding narrator. Think “Duck and Cover” or “Reefer Madness”, and you’ll get the idea.

The app, which requires iOS 6, is a perfect display for the content. From the welcome tour to the multiple ways to browse content, you can tell the developer has put a lot of thought and care into the experience. It basically looks like what an iPad app from the nineteen-fifties would look like, replete with poster themes and the perfectly chosen fonts. This app nails the style for its content.

So once you pick a video to watch (the videos are usually short in length, but there are some longer ones in there, too) the presentation works just as you’d expect. Most of the videos are in black and white, although there are some colour films as well.

The app is fast, responsive, beautiful and clever. It’s a great way to learn about early film culture, and it’s a great contribution to the App Store. I wanted to learn more about the motivation behind it, so I interviewed the man behind it, Chuck Shnider.

Jason: Where did the idea for Linger come from?

Chuck: Linger was really a classic “scratch an itch” software project. I’d spent time off-and-on watching ephemeral films on my iPad, but was always frustrated by how difficult it was to browse online to find good stuff to watch. Some of those difficulties are rooted in limitations of simple webpages, while others were related to inconsistencies with the films’ metadata. Sometimes it boiled down to something as basic as a film being split into multiple parts, but there being nothing obvious to tell you that there were multiple parts, and where they could be found.

The iPad itself is very well-suited for the task of one person exploring a series of short films. At some point I just got sick of the hassle of watching the films on the web and started looking at what sort of open data I could get from archive.org, and started coding up an app.

Jason: How was the app developed (and how long/when did you work on it, etc.)?

Chuck: The app was developed over a period of 9 months as a side-project. Along the way, I also wrote a Mac app to process and analyze the raw metadata from Prelinger Archives. I mentioned earlier the inconsistencies with the source metadata. By applying some love to the data, users of Linger are spared some of that. However, there are few shortcuts when it comes to cleaning up that data. I do what I can with automation, but many of the corrections were applied by hand, based on errors that were discovered by hand.

Jason: Did you build on any existing 3rd party technologies or do anything exciting with Apple’s frameworks?

Chuck: Most of the third-party stuff I use is pretty mainstream: AFNetworking, mogenerator, TransformerKit, HockeySDK. Beyond the more common libraries, I use a third-party library called KSScreenshotManager to help automate production work for App Store screenshots, and also for images which form the basis of the app’s launch images.

The app requires iOS 6, and makes heavy use of UICollectionView, Auto Layout, and Storyboards.

For graphic assets, I’m using stuff from The Noun Project and Subtle Patterns. I also spent a considerable amount of time searching for suitable fonts which had licenses that permitted embedding within a mobile app.

These aren’t technologies per se, but I’d be remiss if I didn’t also mention the great support I’ve received from Rick Prelinger, plus the video and metadata hosting provided by the Internet Archives. For an individual developer to ship an app like this, it’s essential to find a way to host all the video content for little or no cost.

Jason: Why are you highlighting the Prelinger archives? Why are they important for people to see?

Chuck: The heyday of ephemeral films was really from the 1930s to 1960s. This was also the period where corporations and governments really learned to use moving images to influence public opinion through advertising, education, and propaganda. For anyone who is a student of media literacy, consumer society, or 20th century American history, I think there is much to learn from watching these films. If you like kitsch, of course there is tons of that. Watch a little longer, though, and it’s almost inevitable that deeper themes come to the surface.

There are a few large collections of ephemeral films online. Prelinger Archives was particularly suitable for making into an app because the collection is well-researched, and they have clear terms of use for both the films and associated metadata.

Jason: What do you imagine for the app in the future?

Chuck: In the near-term, I have a few features planned that are “creature comforts” for more habitual users. Beyond that, I plan to focus on helping new users find interesting films to watch. There is definitely a lot of room for improvement there, and I think it would help new users to become the sort of habitual users who end up recommending the app to friends, etc.

Jason: Do you have or plan to have any kind of analytics in the app to figure out what viewers are watching? It seems like that might help to fuel more people to discover new gems in the app.

Chuck: No formal plans, but if a large-enough community of viewers does form to make the numbers meaningful, then it is something worth looking into. I’d also want to feel like the extra value from “top viewed films” was worth it to users in exchange for the anonymized usage data they would need to provide. With an app like this, I think of the user experience as more like visiting a library than watching videos on YouTube. What films you are viewing should be considered private, unless you decide to share on purpose.

In a related vein, I’m looking into creating a venue where I can write a bit about films I’ve personally found interesting. One side-effect of developing an app like this is that you get to watch a lot of the films. I haven’t settled on a format yet, but it may start out as a blog and, in time, incorporate that content directly into the app as well.

News is irrelevant. Out of the approximately 10,000 news stories you have read in the last 12 months, name one that – because you consumed it – allowed you to make a better decision about a serious matter affecting your life, your career or your business. The point is: the consumption of news is irrelevant to you. But people find it very difficult to recognise what’s relevant. It’s much easier to recognise what’s new. The relevant versus the new is the fundamental battle of the current age. Media organisations want you to believe that news offers you some sort of a competitive advantage. Many fall for that. We get anxious when we’re cut off from the flow of news. In reality, news consumption is a competitive disadvantage. The less news you consume, the bigger the advantage you have.

News has no explanatory power. News items are bubbles popping on the surface of a deeper world. Will accumulating facts help you understand the world? Sadly, no. The relationship is inverted. The important stories are non-stories: slow, powerful movements that develop below journalists’ radar but have a transforming effect. The more “news factoids” you digest, the less of the big picture you will understand. If more information leads to higher economic success, we’d expect journalists to be at the top of the pyramid. That’s not the case.

He adds, and I agree, that long-form, exploratory journalism is still very important and should exist. It’s time we had a method of expressing such inquiries in a way that lets readers better grasp and evaluate the ramifications they present. Videos and slideshows are not enough.

As promised, I wanted to start sharing some of the reasons I’ve been digging Stikkit, so I thought I’d begin at the beginning: Stikkit’s use of “magic words” to do stuff based on your typing natural (albeit geeky) language into a blank note. There’s a lot more to Stikkit than magic words, but this is a great place to start.

Computer Science education at universities in North America is typically a mix of Computers (programming language concepts, formal languages and proofs, data structures and algorithms, electronic architecture, etc.) and Math (algebra and calculus, linear algebra, statistics, number theory), but I think this education misses entire swaths of what’s involved in building software: making something to augment a human’s abilities.

Having a solid base in the Computer bits of Computer Science is essential as it enables the How of what a software developer makes. But it doesn’t shine any light on the Why a software developer is making something. Without knowing that, we’re often left shooting in the dark, hoping what we make is good for a person.

I’m proposing the education should focus less on mathematics and more on Human factors, specifically (but not limited to) Psychology and Physiology: the study of the human mind and the human body, respectively, and of how the two parts interact to form the human experience.

By understanding the human body, we learn of its capabilities, and just as importantly, of its limitations. We learn about the ergonomics of our limbs, feet and hands, all of which inform the physical representations of how software should be made. We learn about a human’s capacity for sensing information, specifically from the eyes (like our ability to read, understand and parse graphical information like size, shape, and colour) and the hands (like our sophisticated dexterity and sense of tactility, texture, and temperature). Instead of pictures under glass, we’d be better informed to interact in more information-rich ways.

By understanding the human mind, we learn how humans deal with the information received and transmitted from the body. We learn about how people understand (or don’t understand) the things we’re trying to show them on screen. We learn how people model information and try to represent our software in their minds. We learn that people represent some things symbolically and other things spatially, and we learn why that difference is important to building useful software.

We also learn how people themselves learn, how children are capable of certain cognitive tasks at certain ages, and how they differ, cognitively, from adults. This allows us to better tailor our software for our audience.

Psychology does more than teach us how a person works inside: it also gives us the beginnings of how people work amongst themselves, and how people share their mental models with each other. Since humans are inherently social beings, we should be taking advantage of these details when we build software.

All of these details are crucial for building software to be used by people, and nearly all of them are ignored by the current mandatory parts of the Computer Science curriculum. We learn lots about the mechanics of software itself, but nothing about what we’re making. It’s like learning everything about architecture without ever having once lived inside a building. We build software with our eyes closed, guessing as to what might be useful for another person when there are libraries full of information telling us exactly what we need to know.

We need to stop guessing and we need to learn about who we’re building for.

By now you’ve seen the news about Blink on HN or Techmeme or wherever. At this moment, every pundit and sage is attempting to write their angle into the announcement and tell you “what it means”. The worst of these will try to link-bait some “hot” business or tech phrase into the title. True hacks will weave a Google X and Glass reference into it, or pawn off some “GOOGLE WEB OF DART AND NACL AND EVIL” paranoia as prescience (sans evidence, of course). The more clueful of the ink-stained clan will constrain themselves to objective reality and instead pen screeds for/against diversity despite it being a well-studied topic to which they’re not adding much.

And:

And that’s what you’re missing from everything else you’re reading about this announcement today. To make a better platform faster, you must be able to iterate faster. Steps away from that are steps away from a better platform. Today’s WebKit defeats that imperative in ways large and small. It’s not anybody’s fault, but it does need to change. And changing it will allow us to iterate faster, working through the annealing process that takes a good idea from drawing board to API to refined feature. We’ve always enjoyed this freedom in the Chromey bits of Chrome, and unleashing Chrome’s Web Platform team will deliver the same sorts of benefits to the web platform that faster iteration and cycle times have enabled at the application level in Chrome.

“Hello? Hel—yeah I’m still here. No it’s just the internet. Are you downloading anything on your computer right now? No? Well maybe tr—”

[The video resumes and audio comes back]

“Oh there we go, hi! Yeah things are great how are you?”

[You both talk at the exact same time because the audio is lagging so hard]

“What? Can you repe—”

[You both do that again]

“Yeah sorry, I think it’s just—”

[The image freezes; you hear no more sound]

Disconnected

“…”

Calling…

Mom is unavailable for FaceTime

“…”

[You put down your iPad, walk over to your phone, and call your mother and actually have a conversation with her, implicitly admitting to yourself and to her that sometimes newfangled technology is done for our own sake, long before it’s ready for people and the world they live in, and giving her one more reason to think she’s bad at technology, when in reality it’s the technology that’s bad at people. You’ll also realize, if only slightly more than before, that you need to reconsider the next time you decide to add a “cool new feature” nobody was actually asking for if it won’t actually fit into the way they live just quite yet.]

Kay: It is. Complete pop culture. I’m not against pop culture. Developed music, for instance, needs a pop culture. There’s a tendency to over-develop. Brahms and Dvorak needed gypsy music badly by the end of the 19th century. The big problem with our culture is that it’s being dominated, because the electronic media we have is so much better suited for transmitting pop-culture content than it is for high-culture content. I consider jazz to be a developed part of high culture. Anything that’s been worked on and developed and you [can] go to the next couple levels.

Binstock: One thing about jazz aficionados is that they take deep pleasure in knowing the history of jazz.

Kay: Yes! Classical music is like that, too. But pop culture holds a disdain for history. Pop culture is all about identity and feeling like you’re participating. It has nothing to do with cooperation, the past or the future — it’s living in the present. I think the same is true of most people who write code for money. They have no idea where [their culture came from] — and the Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs.

On the Web:

Binstock: Well, look at Wikipedia — it’s a tremendous collaboration.

Kay: It is, but go to the article on Logo, can you write and execute Logo programs? Are there examples? No. The Wikipedia people didn’t even imagine that, in spite of the fact that they’re on a computer. That’s why I never use PowerPoint. PowerPoint is just simulated acetate overhead slides, and to me, that is a kind of a moral crime. That’s why I always do, not just dynamic stuff when I give a talk, but I do stuff that I’m interacting with on-the-fly. Because that is what the computer is for. People who don’t do that either don’t understand that or don’t respect it.

(I originally tried to link to the article on Instapaper but the link wasn’t public. If you use Instapaper, read it there so the article isn’t spread across four pages.)

Software developers have really crappy tools. If we’re lucky, we’ve got some limited graphical tools for creating user interfaces and some form of rudimentary auto-complete, but our programs still exist in text files, which amount to little more than digital pieces of paper. We want to augment programming languages with new IDEs and tools, but it’s often painful to graft these features on existing languages. What we need are languages built around our tools, not the other way around.

The current best effort for better technical tools in my opinion is Apple’s Xcode, specifically the Interface Builder tool. As the name suggests, IB is a tool for creating user interfaces graphically. Instead of writing code to lay out your interface elements, you use IB to position them. You get the benefit of seeing what your app is going to look like as you’re working on it, without the need to stop and rebuild your project with every modification.

Putting aside my qualms with how Interface Builder really works in practice, I do think it’s a great idea. But it’s really not enough, because the interface elements laid out are pretty dumb — you can’t really interact with them much until you actually build the application — which stops pretty short of allowing you to feel how your application really works in practice. You see how it looks with skeletal data, but you don’t get to see how it works.

Another element of Interface Builder is Bindings (these are Mac only), which, in theory, allows you to wire up your data to interface elements in your application without having to write any code to do so. It’s another incredible idea in theory, but in practice it’s very difficult to get right, and it’s maddening to debug.

Why are these tools so hard to get right? It’s because we start with a programming language, which in many cases was developed decades ago under different constraints and conditions, and we try to graft modern tools and ideas atop it. In Objective-C’s case, the language has its roots in the 1980s (or the 1970s, via C). We’re trying to solder modern bits onto old harnesses, what Wolf Rentzsch refers to as bolting an air conditioner onto a go-kart. Sure, it’s possible, but the effect feels pretty bad. That doesn’t mean air conditioning is bad; it just needs to be installed in the right environment.

Language makers need to change what they focus on. Instead of trying to add nice things to the environment (IDE, tools, etc.) that fit the language, they should design a good environment for building software and then design the language around that. Instead of taking a decades-old programming language and grafting on an environment, I think it would be better to first design a programming environment and then build a language to suit it.

As an example, I’m going to list some things I believe would be good in such a programming environment, and although this list is neither perfect nor exhaustive, I hope it illustrates the point of how programming languages can be designed in the future.

Programmers should be able to see their changes reflected immediately. I’m definitely not the first to believe in this principle, and I’ve made my own stab at it with the Super Debugger. But it’s really tricky to get this right with programming languages that weren’t designed with the principle in mind. Imagine how much more flexible the language could be if this were intended behaviour.

The environment should allow for connecting data to graphical representations without the need for excessive glue-code. This is what Bindings on the Mac attempt to solve, but they themselves are glued on. The environment should support this from the ground up so that programmers can easily do the repetitive task of attaching their database or web service data model to a graphical display.
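To make the “ground up” idea concrete, here’s a minimal sketch of what first-class data binding could look like, in Python for brevity. The `Observable` class and its `bind`/`set` methods are invented for illustration; they are not any shipping framework’s API:

```python
class Observable:
    """A value that notifies subscribers whenever it changes."""

    def __init__(self, value):
        self._value = value
        self._subscribers = []

    def bind(self, callback):
        # Fire immediately so the view starts in sync, then on every change.
        self._subscribers.append(callback)
        callback(self._value)

    def set(self, value):
        self._value = value
        for callback in self._subscribers:
            callback(value)


# "Model" side: a piece of data the app cares about.
username = Observable("guest")

# "View" side: a stand-in for a label that should always mirror the model.
label_text = []
username.bind(lambda value: label_text.append(value))

username.set("chuck")
# label_text records every value the "label" displayed: ["guest", "chuck"]
```

The point is that the notification plumbing lives in the environment itself, so attaching a model value to a display is one line instead of a pile of glue-code.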

The environment should allow for better traversal of program code than just classes in files. Groups of related functions and methods should be brought together as the programmer is working with them. The programmer shouldn’t have to hunt for method declarations or implementations, nor should they have to hunt for documentation on the functionality. This should be brought to the developer’s field of vision as it’s needed.

Relying on plain text files for source code drastically reduces the flexibility of the programming environment. Instead, source could be stored in a smarter format, whatever the environment requires. We already rely heavily on our environments as it is: we have long method names that require auto-complete, and stub/generated functions we can never remember. And yet we feel ashamed when we open a language-ignorant text editor and realize we can’t remember how to code without the IDE. Why not embrace the IDE instead, and drop the shame?

Features like auto-complete are great, when they work. But imagine how much better they would be if the language were designed from the beginning to support them.

These are just examples, and many of them have existed as ideas for decades at this point; neither fact is the point. What I’m trying to say is that whatever the principles of programming environment design are, those principles should dictate how the programming language works. We should strive to figure out the kinds of tools developers need to create exceptionally powerful programs, and then design programming languages to enable those tools. That doesn’t mean the IDE has to be written in the new language, just that it should be written with it in mind.

Software is currently the best medium we humans have to express our explanations and explore our ideas. And yet the way we express these programs is limited to what are essentially digital versions of pieces of paper. It’s time we start building better ways to express ourselves with interactive media, and that means building the backbone language around the environment in which it lives.

If you find this idea intriguing, you should definitely come hear me speak at NSNorth in April 2013 in Ottawa. It’s going to be the start of something great.

There are many apps which try to help you out by aligning some of their functions to happen on a per-day basis, whether it’s a reminder or a calendar event, or some other kind of task which has a day-bound relevance. This is a good idea dogmatically but I’ve found all implementations to fail in a pragmatic way: the day doesn’t start at midnight.

The best example of an app violating this is Things (both for Mac and iOS), which has a handy feature for recurring todos. The gist of the feature is “repeat this task every X days (or weeks or months, etc.) after I’ve completed it” with the idea being, I’d like a repeating task, but I only ever want to see it at most once in a list — if I haven’t completed it by the time it’s scheduled to appear again, don’t show it until I’ve marked it as complete and the proper time has elapsed.

In theory this works really well. I’ve got a task to “do some dishes” once per day, but if I happen to miss a day, I don’t get two todos the next day, I just stick with the one. Once I check it off, it recurs again the next day, where the day starts at midnight. Here’s where the problem is:

I stay up a little late at night and usually go to bed between midnight and 2AM. As I’m getting ready for bed, I’ll often review the day’s tasks in Things and mark off my stuff as completed, often things I’ve forgotten to mark as completed while I was going about my day. So let’s say it’s 1AM on Tuesday (technically this is Tuesday, but since I haven’t yet gone to bed, it’s really still Monday to me) and I mark off my recurring “do the dishes” todo. That’s great, and I expect to see the todo again tomorrow (during the daytime of Tuesday). But this is where my view of the situation diverges from that of the software: to me, “tomorrow” means “any time after I wake up and get dressed but before I go to sleep at night”, but to the software it means “any time after midnight”.

What ends up happening, because of our silly disagreement, is Things thinks I’ve already marked the task as being done for what I’d consider “the next day” and instead won’t show me the task again until after the next midnight rolls around. So in this case, I don’t see the task at all on Tuesday and it doesn’t show up until I start using the app Wednesday morning.

In this case, it’s not so grave because, well I’ll probably see the dishes and remember to do them anyway. But it’s still an error in pragmatism for the software to do something like this. There are probably way more users who go to bed at 2AM than who wake up and start their day at 2AM. And yet our software almost always treats us as though we’re mechanically bound to clocks, that our lives are grasped tightly by their hands.

A slightly better example of software handling this is with Siri. If it’s a little after midnight on Monday (so technically Tuesday) and you say “Siri, remind me to do the dishes tomorrow morning”, Siri will respond with something to the tune of “Just to be sure, did you mean Tuesday or Wednesday?” This is a step in the right direction, but it’s still an extra step the person almost never needs to take.

How to solve this problem

The obvious first solution to this problem is to simply have a setting in your application which says “The day starts at X” and let the user pick a time. That works, but it still pretty much stinks because the user is going to have to set this for every app which supports it, and it might change over time as the user’s habits change (student life to working life to parenthood, for example), not to mention many users probably won’t dive through the settings and designate a particular time anyway, so the program remains daft, treating the user as if they’re a clock.

The better solution is to infer what time the person starts and ends their day. It’s pretty easy to figure out with some simple usage statistics, based on what times of day the app is used (treating weekends slightly differently, of course). If the app is used late at night, you learn those usage patterns and adjust the day cutoff to match. If it doesn’t get used late at night, you learn nothing, but you don’t need to: the cutoff doesn’t matter for that user anyway. A simple example of inferring from usage can be found in Bret Victor’s Magic Ink, in the “Engineering inference from history” section.
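As a toy sketch of that inference (the `inferred_day_cutoff` function and its quietest-hour heuristic are my own invention, far cruder than the techniques Magic Ink describes):

```python
from collections import Counter


def inferred_day_cutoff(usage_hours, default=0):
    """Guess when the user's 'day' rolls over: the quietest hour in
    their usage history. usage_hours is a list of hour-of-day ints
    (0-23), one recorded each time the app is opened."""
    if not usage_hours:
        return default  # no history yet: fall back to midnight
    counts = Counter(usage_hours)
    # Pick the hour with the least recorded activity, preferring the
    # earliest such hour on ties.
    return min(range(24), key=lambda hour: (counts.get(hour, 0), hour))


# A night owl: heavy use in the evening and past midnight, nothing
# between roughly 2AM and 9AM.
history = [22, 23, 0, 1, 1, 23, 22, 10, 14, 19, 21, 0]
cutoff = inferred_day_cutoff(history)  # 2: earliest hour with no use
```

With that cutoff, a task checked off at 1AM counts toward the previous day, which is exactly the behaviour the night owl expects.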

If you have to, be smart by being stupid

At the very least, consider solving the problem by making the day end later than midnight. You won’t throw off anyone by making the day end at 3AM vs midnight, and there’s no sense pandering to the edge case of those starting their day that early anyway. It’ll make your software work more like people do.
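A sketch of that fixed-cutoff approach, in Python for illustration (the 3AM constant and the `effective_date` name are assumptions for the example, not any particular app’s code):

```python
from datetime import datetime, timedelta

DAY_CUTOFF_HOUR = 3  # the "day" rolls over at 3AM, not midnight


def effective_date(moment):
    """Map a timestamp to the calendar day the user perceives:
    anything before 3AM still belongs to the previous day."""
    return (moment - timedelta(hours=DAY_CUTOFF_HOUR)).date()


# Checking off a task at 1AM Tuesday...
late_night = datetime(2013, 4, 9, 1, 0)  # Tuesday, 1:00 AM
# ...counts as a Monday completion, so the recurring task shows up
# again during Tuesday's daytime instead of skipping to Wednesday.
assert effective_date(late_night) == datetime(2013, 4, 8).date()
```

Every place the app compares “days” uses `effective_date` instead of the raw calendar date, and the night-owl problem disappears for almost everyone.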

In January, while still at Shopify, I released the Super Debugger for iOS: a wireless, interactive, realtime debugger for iOS applications. That means you can debug your applications over wifi (and potentially, cellular), without needing to set breakpoints. You can send messages to your objects, you can see their state, and you can change their state, all in real time. The project is open-sourced on GitHub, too.

From the project’s homepage:

The Super Debugger (superdb) lets you debug in new ways lldb can’t: it allows you to send messages to the objects in your app, without the need to stop on breakpoints.

Use the powerful Shell on your Mac to inspect your objects, see changes instantly, and speed up development.

The project started as an internal “Hack Days” project at Shopify, where we got two days to start and “ship” (or at least demo) a project. As I’m a Cocoa developer, I had been thinking of ways to make development easier, and superdb was the result.

The Details

The Super Debugger builds upon F-Script, a Mac project by Philippe Mougin. F-Script has been around for probably close to a decade now, and it works as an object browser for Cocoa objects. Philippe refers to it as “a Finder for your objects”, which I think is a great description. It’s a programming environment with a Smalltalk-like syntax where objects can be inspected and messaged.

The project itself doesn’t appear to be maintained much anymore, and it was Mac-only until GitHub user “pablomarx” got a version of it running on iOS. Even though the iOS code had rotted since pablomarx ported it, the port was still a great accomplishment, though it was more of a proof-of-concept than anything else. The testbed was an iOS application with a text field and an output log. It showed that it worked, but it wasn’t exactly useful.

The technology had been around for a while, and yet nothing useful was coming of it. I thought about it for a while and decided my Hack Days project was going to make use of this technology.

So I started by modernizing the iOS port of F-Script, wrapped it up in a network service, used some Bonjour magic to find running instances of it on a local network, wrote a socket protocol between the F-Script interpreter and a Mac app, wrote a command shell for the Mac app and presto, the Super Debugger was born.
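The shape of that plumbing can be sketched in miniature. This toy Python service (not superdb’s actual code or protocol) wraps an evaluator in a socket the way superdb wraps the F-Script interpreter, minus the Bonjour discovery:

```python
import socket
import threading


def serve_shell(host="127.0.0.1", port=0):
    """A toy debugger service: accept one connection, evaluate each
    line it receives in a shared namespace, and send back the result.
    A stand-in for wrapping a real interpreter in a network service."""
    namespace = {}
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen(1)

    def run():
        conn, _ = server.accept()
        with conn, conn.makefile("rw") as stream:
            for line in stream:
                try:
                    result = repr(eval(line, namespace))
                except Exception as exc:
                    result = "error: %s" % exc
                stream.write(result + "\n")
                stream.flush()

    threading.Thread(target=run, daemon=True).start()
    return server.getsockname()[1]  # the actual port chosen by the OS


# A "client" (the Mac shell, in superdb's case) connects and pokes at
# the live program:
port = serve_shell()
client = socket.create_connection(("127.0.0.1", port))
with client, client.makefile("rw") as shell:
    shell.write("1 + 1\n")
    shell.flush()
    reply = shell.readline().strip()  # "2"
```

The real thing is messier, of course, but the essence is the same: an interpreter, a socket, and a shell on the other end.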

Even though it might sound like a lot, the real meat of the operation took only those two days at Shopify. The technology had existed for years, and yet all it took was a couple of days to make something tremendously useful out of it. It might sound self-congratulatory, and it is, as I’m very proud of what I’ve made, but my point is that sometimes wonderful things are hidden under the word “just”. Sometimes there are brand new avenues we’d never even considered, and all that was missing was the tiniest of pieces.

The bigger moral of the story is just because what you’re adding to something doesn’t seem like much, or isn’t difficult to create, doesn’t mean it can’t have a profound impact on what you’re making. Slapping a network layer on an interpreter someone else wrote and adding a few stolen interaction tricks may sound like cheating, but it’s thinking like this we desperately need more of in this world.

Taking advantage of recent advances in flexible electronics, researchers have devised a way to “print” devices directly onto the skin so people can wear them for an extended period while performing normal daily activities. Such systems could be used to track health and monitor healing near the skin’s surface, as in the case of surgical wounds.

One of my tricks for generating startup ideas is to imagine the ways in which we’ll seem backward to future generations. And I’m pretty sure that to people 50 or 100 years in the future, it will seem barbaric that people in our era waited till they had symptoms to be diagnosed with conditions like heart disease and cancer.

The topic of DeRose’s lecture is “Math in the Movies.” This topic is his job: translating principles of arithmetic, geometry, and algebra into software that renders objects or powers physics engines. This process is much the same at Pixar as it is at other computer animation or video game studios, he explains; part of why he’s here is to explain why aspiring animators and game designers need a solid base in mathematics.

Got that? We’re talking about children’s toys built by an AI scientist from where Siri was born, that tracks human movement, can interact with spoken words, is connected to the web and mobile by an engineer with a world-beating scalability background, promoted by an early advocate of blog publishing software that changed the world and designed by people behind the most popular children’s movies in history.

Can you trust a teacher whose only connection to a subject is teaching it?

How can such a teacher know if what he’s teaching is valuable, or how well he’s teaching it? (“Curricula” and “exams”, respectively, are horrendous answers to those questions.)

Real teaching is not about transferring “the material”, as if knowledge were some sort of mass-produced commodity that ships from Amazon. Real teaching is about conveying a way of thinking. How can a teacher convey a way of thinking when he doesn’t genuinely think that way?

I’m sure many teachers spend their evenings thinking about teaching the subject. I have no doubt that these teachers love teaching, and love their students. But to me, that seems like a chef who loves cooking, but doesn’t love food. Who has never tasted his own food. This chef might have the best of intentions, but someone in need of a satisfying meal is probably better off elsewhere.

The cliché you hear is “Those who can’t do, teach” when in reality it’s those who can do who should be teaching.

Computer Science education is in need of vast improvement. We’re taught low-level details of how software works at an atomic level, but we ignore the human side of software. I’m not talking about user interfaces, I’m talking about ignoring the humans who make software and the humans who try to get things done with it.

Everybody believes their line of work is an essential part of the world — and they’re completely correct — but our current age is one built precariously on science and technology. Almost all of Western human culture is either derived from or delivered through some kind of digital orifice. This means there is an incredible need put on those who create and build the technology, and because there’s a lack of education, this also puts an incredible strain on those very same people.

In other aspects of culture creation, in trades such as carpentry or graphic design, the education includes learning the constraints of the craft (like the relationship between the wood and the saw, or how colours render differently between screen and print), and it also includes fundamental principles like aesthetics of form or typography — qualities of the trade which are the result of learning from human experience over the course of centuries.

Computer Science education focuses almost entirely on the former. Students are taught how the computer works, and, beginning at the theoretical ground, learn how software can be represented as fundamental processes (as described by Alan Turing) all the way up to how good object-oriented systems are to be designed. We learn about data structures and algorithms and we learn why some are suitable in some cases but not others. We’re essentially taught the mechanics, but we’re taught nothing of what properties emerge from these mechanics.

Our field is nascent, and although it looks like we’ve been stuck with things like unresponsive, unhelpful graphical user interfaces for a long time, they’re really just the beginnings of what interactive digital machines are capable of doing. What they don’t teach you in Computer Science is that basically anything you can imagine is possible. The bigger problem is student imagination is stifled by the status quo, instead of being nurtured by education. We’re often asked not to re-think how to solve problems for people but instead taught how the mechanics of existing practices already work. We’re not taught to be brilliant, creative thinkers, but instead taught how to become cogs, manufacturing computer programs.

The saddest part is, those who we should be learning from remain mainly ignored in Computer Science education. There have been many great thinkers in our field, from Alan Turing to Stephen Wolfram, from Vannevar Bush to Douglas Engelbart and Tim Berners-Lee, from Alan Kay to Bret Victor. There’s an absolute treasure trove of great thinkers in Computer Science (and thanks to the nature of computers, almost all their work is dutifully digitized and readily available) who go almost entirely unnoticed in Computer Science education. There are great minds, who have solved the same problems over and over again, or whose ideas were decades before their time, who go completely unmentioned in the four years of an undergraduate Computer Science degree. How can we call ourselves educated in this field if we know nothing of its masters?

We’re learning how to build bricks but not how to build buildings, for we learn nothing about how architecture applies to humans, and we learn nothing from the great architects who’ve come before us.

We can fix this by rattling the cages. Those great masters who have come before us didn’t exist in a vacuum and they didn’t invent everything all on their own. They saw further by standing on the shoulders of giants. Their ideas are dangerously important, but they didn’t emerge out of the ether. And so like them, we need to learn from the greats. We need to learn not only how to build software, but to question and examine the fundamentals of what we’re even building. We need to demand an education where ignoring past bodies of information is a travesty, and we need to demand the same from ourselves. If you’ve already finished your university education, don’t worry, because we all continue to learn every day.

So read about and learn from the Greats. And more importantly, help others do the same. Start talking about a Great you admire and don’t shut up until everyone you know has read his or her works, and then you can start building off them.

There is a constant debate between web developers and native application developers on which platform is “better”, where, as you might expect, the definition of “better” varies greatly depending on your perspective.

Native app developers believe their software is better because they have more integration with the host platform: they get access to the user’s computer and things like drag and drop, or a tighter integration with the user’s information, like Calendars or Contacts. These applications also benefit from better performance, as the programs typically run natively, as opposed to being interpreted by a web browser. Web applications will always be playing catch-up, according to some.

Web application developers believe their software is better because it can reach users on every platform and operating system. They don’t have to cater only to users of Macs or PCs or phones or tablets. Every user gets more or less the same experience. These applications also benefit from the nature of their environment: they actually run on controlled web servers instead of on the user’s local machine. The important consequence of this is that software developers can rapidly change and improve the application without users having to take any action whatsoever. They simply visit the page and they’re always viewing the most recent version of the application.

I’ve been a native application developer for many years now and I’ve always preferred it for the aforementioned reasons, but lately I’ve been starting to see more of its flaws and fewer of its benefits. I’ve been looking at how human creativity works, and more importantly, what impedes it. And the common thread I’ve seen in all this research is that a delay in seeing the results of creation seriously impedes that creation.

That statement is true at all levels, from the way the code works all the way up to how a person uses the software. From the bottom, most modern web application software is written using dynamic languages, from Ruby or Python on the back end to JavaScript on the front end. The benefit of a dynamic programming environment is that changes can be made, and more importantly, reflected, at a much quicker pace than in more static programming languages. Anyone who’s made test changes in WebKit’s “JavaScript console” knows the benefits of having a REPL to play with application code. Changes can be tested while the code is running. Until recently, this wasn’t even possible in native iOS applications.

The more important benefit is, however, at a higher level. As a developer, there’s no impediment to getting new versions of my code out to users. I simply write the code, and when it’s been tested enough, I can deploy the fixes to my users. They don’t have to update anything, they just always get the most recent version of my application. GitHub illustrates this wonderfully, as they ship new code on the daily. “What version of GitHub are you using?” The current version.

Paul Graham described the same dynamic at Viaweb, one of the first web applications:

When one of the customer support people came to me with a report of a bug in the editor, I would load the code into the Lisp interpreter and log into the user’s account. If I was able to reproduce the bug I’d get an actual break loop, telling me exactly what was going wrong. Often I could fix the code and release a fix right away. And when I say right away, I mean while the user was still on the phone.

In the old days, computer programs were written on punch cards which were fed into the computer, tediously, for the machine to execute them. It wasn’t until hours later that the results of the program were printed back out for the programmer. There was a big delay between the programmer writing code and there being a solution to his or her problem. How barbaric.

These days, there’s a smaller delay between the programmer writing the code and seeing the result of the execution, but there’s still an immense delay, for native applications, between when the programmer writes the code and when the user sees the result. Our native applications are still shipped as though they’re printed onto some physical artifact, which must be moved through space — at the expense of time — to a customer. This was necessary for punch cards and it was necessary for floppies and CD-ROMs, but it’s no longer true in the age of the internet.

Shipping native applications, even in the best case, is almost always a slow process. There are long development cycles with tons of testing needed before the application can be shipped. And then, there’s a struggle to get users to update their applications to the latest version.

I think there are a few reasons why users don’t update their native applications:

1. Because updates ship so infrequently, they usually involve many changes which break things.

2. Because it’s tedious, mechanical, shit work they probably shouldn’t be doing. It should just be done for them.

3. Because even if they wanted to, they often don’t know how.

I think #1 is the biggest culprit for those in the know. Experienced users have unfortunately experienced many poor upgrade experiences. But the experiences are so poor because the updates were so big and contained so many changes. And the updates were so big because users so infrequently update their software. It’s a vicious cycle and it needs to be broken.

The problem gets compounded when working with Apple’s App Stores, where even if developers wanted to ship on a regular basis, they have absolutely no power to do so. Instead, they’ve got to wait usually a week or more between shipping code and people being able to use it. Not only that, but while they’re waiting, they can’t ship any incremental changes lest they have to start the waiting period all over again. It really sucks.

I’m not entirely sold on web development as the one true way forward, but I do admit I admire many of the benefits such an environment provides. I want native development to learn its lessons. I want to ship software as frequently as I can. I want my users to feel like the users of Chrome or Chocolat, applications whose updates happen so frequently it’s basically invisible. If we could update native applications multiple times per week, it would become the norm. Update problems would be reduced because changes would be smaller and bugs would be easier to track down. And users would benefit most of all because they’d no longer be required to do anything — they’d just always get the best software.

In the movie, when Tom Cruise straps on his infogloves and starts rummaging through the dreams of the psychic precogs, classical music begins to play. He stands in front of a semicircular computer screen, the size of a wall, and uses his hands to fast forward and rewind, to zoom in and out and rotate the screen. Many of the gestures are laughable—he places one hand in front of another to zoom in, like a vertical hand jive. He goes to shake someone’s hand and all his files are thrown down into the corner. It’s, frankly, absurd—especially if you haven’t seen it since 2002. THIS is the thing tech reviewers are always comparing a new interface to? Even so, there are recognizable gestures that anyone with an iPhone has used. The pinch-zoom, the rotation, and the swipe-to-dismiss are all used daily by smartphone users. And while Cruise’s begloved gesticulation is silly on its face, everyone else in the movie has to use a regular old multitouch computer monitor.

This is annoying on its own, but:

In 2006, a year before the iPhone’s debut, Jeff Han gave a TED Talk about multitouch gestures, demonstrating the use of them to manipulate photos and globes. Throughout, he described gestures as an “interfaceless” technology, a way to intuitively zoom in and out and rotate around images without a “magnifying glass tool.” This is, of course, nonsense. While touching something to get more info may be intuitive, every other gesture demonstrated is noteworthy for how NON-instinctive it is. Does pressing with one hand and dragging with another really intuitively represent rotation? Especially of a 3D object, like a globe?

The press gets so caught up in whiz-bang “innovation” that we’re left with magically-shitty interfaces which are even more confusing than a keyboard and mouse. Where gestures are used constantly without merit and nobody knows how to use anything.

What Bret really did was create a new grammar for data visualization, a new set of nouns, actions, and rules that allow you to express graphical representations in terms of geometry.

A natural editor is an editor that allows you to work with the grammar in terms of the end product. It is an environment that allows for “Direct Manipulation” - you don’t edit symbolic representations that ultimately turn into the output, you manipulate the end product itself.

That second paragraph is really, really important. I believe it points to a fundamental issue truly holding back software development. We don’t work on symbols or artifacts; we work on instructions which become symbols or artifacts. That’s pretty uncommon in the world of creation, and I think it’s a massive deficit which must be overcome.

The basic premise of the idea is this: I wish iOS provided an API for applications to submit a network request (like an HTTP GET or POST) to the OS, to be executed at the next available chance.

This is for times when the device is without internet access, like while riding the subway, but when the user still wants the action to happen. The app would tell iOS, “When we get network access again, I want you to do this request”, and iOS should return the response to the application when it does (or on next launch).

An example of this would be using an article-posting app on the subway. I might write a nice article on my phone while underground, and press “Post”. Because there is currently no internet access, the app hands off this network request to the OS to be executed at the next available time. I can then safely quit my article-posting app, knowing that when I get off the subway and my device gets internet access, my article will finally be posted. When my app next launches, it’ll get an NSData of the response to that network request.

The addition of one more multitasking service would solve this issue for a lot of application types: a periodic network request. Here’s how I would do it:

1. The application gives the system an NSURLRequest and an ideal refresh interval, such as every 30 minutes, every few hours, or every day.

2. iOS executes that request whenever it deems it should, and saves the response to a local file.

3. Next time the application launches, iOS hands it an NSData of the most recent response.
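A sketch of what such an API might look like. This is purely hypothetical: none of these methods exist in iOS, and the names are invented for illustration only.

```objc
#import <UIKit/UIKit.h>

// Hypothetical registration, perhaps done at app launch:
NSURLRequest *request = [NSURLRequest requestWithURL:
    [NSURL URLWithString:@"https://example.com/feed.json"]];
[[UIApplication sharedApplication] scheduleBackgroundRequest:request      // invented method
                                             refreshInterval:30.0 * 60.0]; // seconds

// Hypothetical retrieval on next launch: iOS hands back the most
// recent response it fetched on our behalf, if any.
NSData *latest = [[UIApplication sharedApplication] dataFromLastBackgroundRequest]; // invented method
```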

The two would be welcome additions to iOS and complement each other very well. In short, I feel like the multitasking offerings on iOS are still lacking, and the OS often doesn’t reflect what people actually want to do with their devices. Such an API would enable people to do more.

Update

One thing I forgot to mention while writing this post was that a potential implementation of it can sort of be done today, but it would be a hack (thanks to Craig Stanford for reminding me about this).

You could do this by enabling “Significant Location Updates” in your application, and then trying to perform the network activity when they fire. With these enabled, iOS will launch your app, even if it’s been quit, to tell you the device has moved to a new location (typically at a granularity of a neighbourhood or so). So when the device moves, you get a chance to execute code, and this could include network activity. Instapaper, among others, has a feature that does this, but again, it’s a hack.
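A rough sketch of the hack, assuming the adopting class owns a CLLocationManager property, acts as its delegate, and that -retryPendingUploads is a hypothetical app-specific method:

```objc
#import <CoreLocation/CoreLocation.h>

// Opt in to significant-change location monitoring; iOS will relaunch
// even a quit app when the device moves far enough.
- (void)startDeferredNetworkingHack {
    self.locationManager = [[CLLocationManager alloc] init];
    self.locationManager.delegate = self;
    [self.locationManager startMonitoringSignificantLocationChanges];
}

// Delivered when the device moves; this is the window in which the app
// can run code, including attempting the queued network request.
- (void)locationManager:(CLLocationManager *)manager
     didUpdateLocations:(NSArray *)locations {
    [self retryPendingUploads]; // hypothetical app-specific method
}
```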

The first thing you’ll notice is that you probably won’t notice anything at all. The website looks and works identically to how it’s always worked, for you, the reader. But for me, the author, the website has undergone a massive overhaul and has been re-written from the ground up to support all kinds of new goodies. Gone are the days of the jury-rig known as Colophon 1, the old website software I’d written to power this website.

Instead, with Colophon 2 I’ve got a proper web interface. No more publishing articles over git. Colophon 2 has a proper REST API, including a built-in article editor (with autosave). I’m currently writing this article directly in the browser without worry of it crashing. And I can start writing on my computer and finish up on my iPad. Or I can post links directly from my iPhone. This is something I could never do with the original version of Colophon.

Of course, I always could have gone with a pre-existing publishing software, but none of the web publishing services I’ve looked at have really met my needs. And hell, it’s just fun to hack away on something outside of my typical domain. The app is in really great shape and I’m not entirely sure what to do with it. I could open source it, but I feel like it would be a waste of the good, hard work I’ve put into it. And then, of course, I could sell it, but I’m not sure there really needs to be yet-another-publishing-service. Suggestions are welcome.

On a more personal note

You may have noticed a distinct lack of articles here in the last month or so, and I’m happy to say it’s all been for a really good reason, and even happier to say the lapse should now be over.

I’ve just moved from Ottawa to New York City and started a new job as an iOS Developer at the New York Times. Working for Shopify was an incredible experience, and I was sad to leave, but I’m even more excited for this new stage in my life. I’m a Canadian living in the US, and I’m exploring a new city. It’s equal parts exhilarating and terrifying but I couldn’t be more excited for it.

CP: Yes, I do. But one of the things that interests me about the game is that you have these semi-autonomous characters. They’re not totally autonomous, and they’re not totally avatars either. They’re somewhere in between. Do you think that’s disorienting to the player, or do you think it’s what makes the game fun?

WW: I don’t think so. I mean it’s interesting. I’m just surprised that people can do that fluidly, they can so fluidly say “Oh, I’m this guy, and then I’m going to do x, y, and z.” And then they can pop out and “Now I’m that person. I’m doing this that and the other. What’s he doing?” And so now he’s a third person to me, even though he was me a moment ago. I think that’s something we use a lot in our imaginations when we’re modeling things. We’ll put ourselves in somebody else’s point of view very specifically for a very short period of time. “Well, let’s see, if I were that person, I would probably do x, y, and z.” And then I kind of jump out of their head and then I’m me, talking to them, relating to them.

On SimHealth, an example of a tool for sharing a mental model amongst many people, a powerful concept:

[WW:] We did a project actually several years ago called Sim Health for the Markle Foundation in New York. It was a simulation of the national healthcare system, but underneath the whole thing, the assumptions of the model were exposed. And you could change your assumptions, for example, as to how many nurses it takes to staff a hospital, or how many emergency room visits you would have given certain parameters, etc., etc. The idea was that people could kind of argue over policy but eventually that argument would come down to the assumptions of the model. And this was a tool for them to actually get into the assumptions of the model. When people disagree over what policy we should be following, the disagreement flows out of a disagreement about their model of the world. The idea was that if people could come to a shared understanding or at least agree toward the model of the world, then they would be much more in agreement about the policy we should take.

A humorous example of a shared model:

WW: In Go, both players have a model of what’s happening on the board, and over time those models get closer and closer and closer together until the final score. At that point you have a total shared model of, you know, “you beat me.” (Laughter.) Up until that point, though, there’s quite a large divergence in the mental models that players have. Especially if you ask them what the score is, or “How are you doing?” They’ll frequently say, “I’m doing pretty well, here,” or “He’s whipping me.” Or that backwards thing, “Oh, he’s whipping me,” when really you’re the one winning. And it really comes down to how each person is mentally overlaying their territories onto this board. In each player’s mind, there’s this idea that “Oh, I control this and they control that, and we’re fighting over this.” They each have a map in their head of what’s going on, and those maps are in disagreement. And it’s those areas of maximum disagreement where the battles are all fought. You play a piece there, and I think “Oh, that’s in my territory, I’m going to attack it cause you’re in my territory.” Whereas you’re thinking, “Oh, that’s my territory, you’re invading me.” And finally, the battle resolves that in our heads, and then it’s pretty clear that, “Okay, that’s your territory and that’s mine.” So the game is in fact this process of us bringing our different mental models into agreement. Through battle.

And finally, something intriguing I’m not sure they ever shipped:

WW: (Laughs.) Yes. I’m trying to basically chronicle the average model that the players have made in their heads. It’s like cultural anthropology. Already it's having a huge impact on what we do with our expansion packs and the next version of The Sims. We’re getting a sense of when people like to play the house building game vs. the relationship game, and what types of families they like to create, what objects they like the most. Eventually, in the not too distant future, we’re working towards having this be dynamic on a daily basis so the game in some sense can be self-tuning to each individual player based on what they’ve done in the game. That’s what I think is going to be really interesting slash kind of scary[sic]. Because I can see a really clear path to getting there. You look at what a million people have done the day before in a game, have all that information sent up to your server, do some heavy data analysis, and then every day send back to all these games each with its own new tuning set.

CP: So this would be The Sims Online where everything is going on at the server level as opposed to individual machines.

WW: No, this could be for just the next version of The Sims.

CP: As long as you have a way of collecting the data from the people.

WW: Right, and they could easily opt out if they want to turn it off. But for the most part they could still be playing a single player game, it’s just that every time they boot it up it goes to our server and asks for the new tuning set. And when they finish playing every day it sends back the results of what they did. So they’re still playing a single-player game, but it’s individually tuning itself to each player. You know based on your preferences, but also based on the parallel learning of a million other people. So you might discover things. Or somebody might actually initiate a sequence of actions on their computer in a very creative way and the computer might recognize that, send it up to the server, and say: “Wow, that was an interesting sequence, and that person likes doing comedy romances. Let’s try that on ten other people tomorrow. If those ten people respond well, let’s try it on a hundred the next day.” So it could be that the things aren’t just randomly discovered, but they’re also observed from what the players did specifically.

Versu is an interactive storytelling platform that builds experiences around characters and social interaction. Each story sets out a premise and some possible outcomes. As a player, you get to select a character, guide their choices, watch other characters react to what you've chosen, and accomplish (or fail at) your chosen goals.

Watch the video and imagine the kinds of stories you'd create. Who says the book is dead? It looks more alive than ever.

Over the past few years, Apple has revolutionized how people use technology. App developers have access to an exciting ecosystem that continues to grow at an enormous rate. More than ever, we as designers, developers, and business leaders have the tools available to change the world.

Our goal is to bring together experts on a variety of important topics for three days to broaden your horizons, make you think differently, and let you network with fellow devs and designers, all while having a great time.

If you’re an iOS or Mac developer, you really ought to buy a ticket and do so quickly. The conference runs April 19-21, 2013 and the lineup looks fantastic (if I do say so myself).

Sex evolved because the benefit of the diversity created through the intermixture of genomes outweighed the costs of engaging in it, and so we enjoy exchanging our genes with one another, and life is all the richer for it. Likewise ideas. “Exchange is to cultural evolution as sex is to biological evolution,” [zoologist Matt] Ridley writes, and “the more human beings diversified as consumers and specialized as producers, and the more they then exchanged, the better off they have been, are and will be. And the good news is that there is no inevitable end to this process. The more people are drawn into the global division of labour, the more people can specialize and exchange, the wealthier we will all be.”

I was discussing a similar idea tonight with a coworker. It’s one thing to bring brilliant people together, but it’s a great deal better if they are exchanging ideas. If companies had more regular discussions of ideas, both internally and with other companies, everyone could reap the benefits.

See also Matt Ridley’s excellent and convincing TED Talk on the same subject.

Last night, while catching up on some old articles in my RSS feeds, I read a quote by Buzz Anderson:

The programmer, who needs clarity, who must talk all day to a machine that demands declarations, hunkers down into a low-grade annoyance. It is here that the stereotype of the programmer, sitting in a dim room, growling from behind Coke cans, has its origins. The disorder of the desk, the floor; the yellow Post-it notes everywhere; the whiteboards covered with scrawl: all this is the outward manifestation of the messiness of human thought. The messiness cannot go into the program; it piles up around the programmer.

Readmill is kind of like Goodreads, except it looks much nicer, more modern and has an emphasis on sharing passages from the books you’re reading with your friends:

Readmill is a curious community of readers, highlighting and sharing the books they love.

We believe reading should be an open and easily shareable experience. We built Readmill to help fix the somewhat broken world of ebooks, and create the best reading experience imaginable. Readmill launched in December 2011 with a small dedicated team from all over Europe. We are based in Berlin.

From there, I discovered Born Hungry Magazine, a yummy-looking cooking and eating website which she founded and contributes to. It describes itself as:

an online magazine about why we cook and the curiosity that drives us. With every feature and recipe, we want to celebrate and encourage home cooks.

We believe everyone can make a delicious meal (or cocktail, as you do). We’re a bunch of inquisitives: roasting, pickling, tasting, and sharing. And we want to publish things in our slow, quiet way to inspire you to do the same.

She also posted, around the time I started writing this article, a link to her page on The Pastry Box Project, a website which shares daily thoughts from a roster of thirty writers, one per day for a whole year:

Each year, The Pastry Box Project gathers 30 people who are each influential in their field and asks them to share thoughts regarding what they do. Those thoughts are then published every day throughout the year at a rate of one per day, starting January 1st and ending December 31st. 2013’s topic is “Shaping The Web”.

The night following Ethan’s email coincided with a Madmen marathon. This show, probably one of the most subtle and well written ever aired to this day, often got me thinking about how interesting it would be to have direct access to the thoughts of 1960s ad executives, about their jobs, and what they were doing. Those people were simply defining a large portion of what their day and age was becoming (whether for good or bad, or worse) and I wanted to know if they were fully aware of the extent to which they were helping to shape the daily experience of millions of people, and, if so, how they felt about it. I had read some memoirs and some interviews, but those weren’t the raw material I was looking for, the right-now-in-the-heat kind of thinking.

Later, before falling asleep, thoughts of new projects, Madmen, and browsers being resized (I had spent a fair amount of the day testing the site) all mixed together.

And the Pastry Box Project took shape. Almost discreetly.

I realized I could gather the material I dreamed of while watching Madmen. I simply had to ask people to share their thoughts about their work, the industries they’re developing in, and themselves.

Sometimes I get the feeling like I’m missing the vast majority of the interesting content on the Web, and then I have days like today where that thought is confirmed. Here’s to discovery.

I’ve started watching Star Trek: The Next Generation from the beginning again. Instead of just enjoying it as a nostalgic trip through my childhood, I’ve been trying to actively watch what’s happening in the show. I’ve noticed a number of personality and behaviour patterns that totally clash with how I remember each character as a child. Bear with me as I review what the first half of TNG’s first season was like a second time around.

Objective-C in the Cocoa and Cocoa Touch environments has always had one particular source of pain for newcomers: memory management. In the olden days, we Cocoa programmers had Reference Counting, a form of manual memory management; though the rules are simple, they were also hard to master and easy to screw up. After a brief and half-hearted stint with fully managed memory in the form of OS X’s Garbage Collector, Apple has now deprecated the technology (which it never could get running well enough on iOS).

These days, we have Automatic Reference Counting (ARC) on both platforms, which is somewhere in between. In this article, I will explain the fundamentals of Cocoa memory management, what you must know and what can be left to the tools.

Smells like Garbage Collection

At first glance, ARC seems an awful lot like Garbage Collection: it is automatic, after all. But despite that first impression, ARC is in fact quite different from GC. ARC is a compile-time technology, which means there is no collector running alongside your app’s process, and this saves on performance. But it also means ARC isn’t as capable as a full garbage collector.

The hint is in the name. If you look closely, it’s Automatic Reference Counting, not Automatic Memory Management.

In essence, this means ARC doesn’t relieve you, the programmer, from knowing the memory management rules; it only relieves you from writing memory management code. This doesn’t mean ARC is hard, it just means you still have to pay a little bit of attention instead of letting the system do all the work (as it ought to do). ARC is a compromise.

Ownership

The key and fundamental principle of Cocoa’s memory management rules is, and always has been, Ownership. Learn this principle and learn it well, and ARC and Manual Reference Counting will make absolute and perfect sense.

Instead of a system process exploring the runtime’s object graph, Cocoa’s reference counting system relies on a compile time Ownership model to determine the lifetime of objects at runtime. It can be expressed in three simple axioms:

An object will exist in memory so long (but no longer) as at least one object maintains ownership of it.

To keep an object, you must take ownership of it. If you are done with an object, you must relinquish ownership of it.

More than one object may share ownership of a given child object.

Ownership is acquired in one of the following ways:

By allocating an object in memory, using any method whose name begins with alloc, new, copy, or mutableCopy.

By requesting ownership of the object.

In ARC, this is by assignment to a strong property or variable (object instance variables default to strong ownership under ARC, as they ought to).

In MRC, this is by assignment to a retain property or by sending the object the retain message (object instance variables don’t do any of this for you under Manual Reference Counting, so if you wish to retain an object when setting it to an instance variable, be sure to send it retain).

If you do neither of these things, you don’t own the object and you should treat it as though it will go away after the scope in which it’s being used. That means you don’t have to do anything special to keep an object around for the duration of a method block, but unless you hang on to it explicitly, it will go away.

To relinquish ownership, all you have to do is remove the strong/retain reference to the object (by nil-ing out the property), in either ARC or MRC, or, in MRC only, by sending the object a release or autorelease message.
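As a minimal sketch, assuming a hypothetical strong/retain model property backed by a _model instance variable:

```objc
// ARC: clearing the strong reference relinquishes ownership.
self.model = nil;

// MRC: the same effect, written out by hand.
[_model release];
_model = nil;
```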

When and When Not

After mastering the concept of Ownership, the rest just falls naturally into place. To repeat from above: if you don’t claim ownership of an object, you can’t expect it to be around for any longer than its current scope. That’s the API contract you make with Cocoa’s memory management rules, whether ARC or MRC.

With that knowledge, you can thankfully let ARC take care of most of the rest (with one exception, as we’ll see later). Some examples of when you the programmer need to do work, and when you don’t:

- (void)someMethod {
    id localObject = [SomeClass new]; // creates ownership, but only for the method’s scope
    // ... do your stuff with localObject
    return; // ARC automatically relinquishes ownership of localObject for us, because our object didn’t take ownership
}

- (void)setupState {
    // In the below case, we assign the object to a strong property, thus taking ownership.
    // Even though the +new method also comes with ownership, it’s local like the above
    // example, so we get the intended behaviour of a single ownership. ARC figures it out for us.
    self.instanceProperty = [SomeClass new];
    // ... etc.
}

- (void)addToStateArray:(id)otherObject {
    [self.arrayProperty addObject:otherObject];
    // In this case, we don’t directly claim any ownership because our array does that for us.
}

Where ARC is weak compared to GC

Garbage Collection provides the programmer with the contract that it will take complete control over managing memory in the application process, whereas ARC only makes the claim of relieving the programmer of writing ownership machinery. This means, unlike GC, ARC does not fully manage every aspect of process memory. Most importantly for Cocoa developers, this means ARC cannot break ownership cycles.

A cycle occurs when a parent object claims ownership of a child object which either likewise claims ownership of the parent, or owns a descendant which claims ownership of the parent. It might look like the following, where -> means ownership:

A -> B -> C -> A

In such a scenario, keeping in mind the first and second axioms of Cocoa memory management, object A can never be deallocated because there is an ownership cycle. C owns A, but C can’t be deallocated because B owns C. And B can’t be deallocated because A owns B. And so on. To solve this, the programmer must take responsibility and use a weak reference.

A weak reference is just that: a reference to an object without claiming ownership of it. These also have the nice benefit of automatically being set to point to nil when the object at the other end disappears (I’m with Wolf Rentzsch on this one: if the runtime is capable of this, why didn’t they just go all the way and do real GC?). Here’s an example of solving an ownership cycle with weak, from the parent:
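A minimal sketch of what such a pair might look like (the Parent and Child class names are illustrative):

```objc
#import <Foundation/Foundation.h>

@class Child;

@interface Parent : NSObject
@property (nonatomic, strong) Child *child;  // Parent claims ownership of Child
@end

@interface Child : NSObject
@property (nonatomic, weak) Parent *parent;  // back-reference, without ownership
@end
```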

If the Child class had a property that wasn’t denoted as weak, then we would have an ownership cycle, but with weak, we can have a healthy object graph devoid of leaks or cycles.

ARC is a compromise

Cocoa memory management has always been a source of consternation for newcomers. Even though ARC aimed to solve that by taking more control over memory management, it’s not a full solution like Garbage Collection. In order to master it, you still must master the above concepts. But by internalizing the principles of Cocoa’s memory management, ARC takes care of the rest.

I helped someone solve this tonight. Since DailyBooth.com shut down at the end of 2012, you can’t get at any photos. But if you need to recover some, here’s how (this trick works as of January 2, 2013):

Visit http://m.dailybooth.com/USER_NAME (except put your user name in the URL) and you should be able to see all your photos and grab them. I don’t know how long this trick will last, but that’s how I did it.

Also, some of the photos seemed to not want to download at first, so you need to view them in Chrome, open the image in its own tab, and then use Save As to get the image to save. I don’t know why.

Every single thing in my life that has made me truly happy — I only got there by trusting myself and ignoring everyone else, even when it seemed insane. I can’t tell you how many times this has paid off for me. It often pays off immediately, within an hour. If I just trust what feels right, everything seems to fall into place magically.

My friend Ash Furrow recently published an article entitled “Seven Deadly Sins of Modern Objective C” in which he lists grievances committed by programmers new and experienced alike who use outdated or incorrect methods of Objective C development. This article struck a chord with me, but not for good reasons.

The article begins with the bellicose proclamation:

If you code Objective-C, this is going to offend you and that’s good. If you aren’t offended, then you don’t care, and that’s bad.

I disagree with both statements and the conclusion. A list of common incorrect or outdated patterns of Objective C should make for an enlightening and educational read — it should not be looking to pick a fight. The original version of the article, which was painfully and profusely peppered with profanity, has since been revised with less reviling language, but the harangue remains otherwise intact.

The so-called “sins” are hardly egregious, and few of them relate directly to Objective C anyway. Properly ordered, they are:

Giant .xib files. These are interface files and not part of the language.

Nonetheless, many novices will use this technique, which will use too much memory and suffer from slow loading times when the xib is read in. Experienced Cocoa developers know this already, but new programmers are probably not aware that creating different views in the same file is hazardous. However, explaining this in an “offensive” way doesn’t help anyone.

Not Using Dot Syntax. This sin has to do with a syntax introduced with declared properties in the Objective C language. He writes:

Now that we’ve covered a sin common with newbies, let’s tackle one that’s common with Objective-C greybeards.

Tossing out “greybeards” rarely accomplishes much more than grabbing the attention of a developer who likely knows a lot about a programming language, and it certainly doesn’t encourage said programmer to pay much attention to any forthcoming arguments.

Get with it, old timers! Dot syntax isn’t just The Way Of The Future, but it has other benefits, too, like not alienating all your peers and great compatibility with ARC (what’s that? You don’t use ARC? Jesus…).

Again, this does nothing but inflame when instead the intent should be to inform. Just telling an experienced programmer this is “wrong” is a dogmatic solution. Any experienced programmer will immediately respond with that lovely three-letter word we need to hear more of: “Why?” The article provides no answer for that question.

I agree with using dot-syntax for properties (which I’m assuming Ash is also advocating, as opposed to using dot-syntax for any old method), and when I explain this to other developers, I use Brent Simmons’ line of reasoning:

Yes, I know it doesn’t matter to the compiled code, but I like having the conceptual difference, and the syntax reinforces that difference.

And while you might not like dot notation — or you might love it and want to use it for things like count that are not properties — I ask you to remember that cool thing about Cocoa where we care about readability and common conventions. […]

Say I’m searching a .m file to find out where a UIImageView gets its image set. Knowing that image is a property, I search on .image = to find out where it gets set.

If I find nothing I start to freak out because it doesn’t make any sense. […] I know that image view displays an image, and I know the image is set somewhere in that file — and I can’t figure out where.

And then, after wasting time and brain cells, I remember to search on setImage:. There it is.

Brent says not only what he thinks you should do, but why he thinks you should do it too.
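Brent’s grep scenario is easy to demonstrate. In the sketch below (the function and image name are hypothetical), both forms compile to the same message send, but only the dot-syntax assignment turns up when you search for `.image =`:

```objc
#import <UIKit/UIKit.h>

static void configureImageView(void) {
    UIImageView *imageView = [[UIImageView alloc] init];

    // Dot syntax: reads like state access, and a search for
    // ".image =" finds this assignment immediately.
    imageView.image = [UIImage imageNamed:@"avatar"];

    // The equivalent message send: only a search for "setImage:"
    // would turn this one up.
    [imageView setImage:[UIImage imageNamed:@"avatar"]];
}
```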

Too Many Classes in .m Files. This one is pretty straightforward but I also feel like this isn’t something a beginner will do too often, as the standard Xcode behaviour is to generate two files (*sigh*) for every class. It’s possible that this is a common beginner problem but I haven’t witnessed it. Either way, Ash is right here. Jamming extra classes in an implementation file is not OK unless those classes are helper classes to the eponymous file class.

Not Testing With Compiler Optimizations. Agreed, but I don’t feel like this is a common problem because in the general case, you develop under the assumption that these optimizations are not introducing bugs. That’s the agreement you make with the compiler. But if you’re getting crashes in your released version, this is a great place to look for mistakes.

Architecture-Dependent Primitive Types. The simple truth is, whatever the stylistic objections, the modern Cocoa APIs use these types, so if you don’t use them too, you’re risking a loss of precision and you’re guaranteeing yourself a loss of abstraction.
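A small sketch of what matching the APIs’ types looks like in practice (the function is illustrative):

```objc
#import <Foundation/Foundation.h>
#import <CoreGraphics/CGBase.h>

static void countThings(void) {
    NSArray *items = @[@"a", @"b", @"c"];

    // -count returns NSUInteger; matching the type avoids a
    // narrowing conversion on 64-bit architectures.
    NSUInteger count = items.count;

    // CGFloat is a double on 64-bit targets and a float on 32-bit
    // ones, which is why the geometry APIs traffic in it.
    CGFloat width = 320.0;

    NSLog(@"%lu items, %f wide", (unsigned long)count, (double)width);
}
```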

Unnecessary C APIs. This sounds like more of an issue with Apple’s code (and I agree) than it is with other third-party code.