The nation I live in has decided to impose sanctions on itself. The government has yet to figure out the exact details. It won’t be good.

Today marks the day that the ironically-named United Kingdom of Great Britain and Northern Ireland officially leaves the European Union. Nothing will change on a day-to-day basis (until the end of this year, when the shit really hits the fan).

In interviews, Martin has compared himself to a gardener—forgoing detailed outlines and overly planned plot points to favor ideas and opportunities that spring up in the writing process. You see what grows as you write, then tend to it, nurture it. Each tendrilly digression may turn into the next big branch of your story. This feels right: good things grow, and an important quality of growth is that the significant moments are often unanticipated.

On the other side of writing is who I’ll call “the architect”—one who writes detailed outlines for plots and believes in the necessity of overt structure. It puts stock in planning and foresight. Architectural writing favors divisions and subdivisions, then subdivisions of the subdivisions. It depends on people’s ability to move forward by breaking big things down into smaller things with increasing detail.

It’s not just me, right? It all sounds very design systemsy, doesn’t it?

This is a false dichotomy, of course, but everyone favors one mode of working over the other. It’s a matter of personality, from what I can tell.

Replace “personality” with “company culture” and I think you’ve got an interesting analysis of the two different approaches to design systems. Descriptivist gardening and prescriptivist architecture.

Frank also says something that I think resonates with the evergreen debate about whether design systems stifle creativity:

It can be hard to stay interested if it feels like you’re painting by numbers, even if they are your own numbers.

I think Frank’s comparison—gardeners and architects—also speaks to something bigger than design systems…

I gave a talk last year called Building. You can watch it, listen to it, or read the transcript if you like. The talk is about language (sort of). There’s nothing about prescriptivism or descriptivism in there, but there’s lots about metaphors. I dive into the metaphors we use to describe our work and ourselves: builders, engineers, and architects.

It’s rare to find job titles like software gardener, or information librarian (even though they would be just as valid as other terms we’ve made up like software engineer or information architect). Outside of the context of open source projects, we don’t talk much about maintenance. We’re much more likely to talk about making.

In her book The Real World of Technology, the metallurgist Ursula Franklin contrasts prescriptive technologies, where many individuals produce components of the whole (think about Adam Smith’s pin factory), with holistic technologies, where the creator controls and understands the process from start to finish.

(Emphasis mine.)

In that light, design systems take their place in a long history of dehumanising approaches to manufacturing like Taylorism. The priorities of “scientific management” are the same as those of design systems—increasing efficiency and enforcing consistency.

Humans aren’t always great at efficiency and consistency, but machines are. Automation increases efficiency and consistency, sacrificing messy humanity along the way:

Machine with the strength of a hundred men
Can’t feed and clothe my children.

Historically, we’ve seen automation in terms of physical labour—dock workers, factory workers, truck drivers. As far as I know, none of those workers participated in the creation of their mechanical successors. But when it comes to our work on the web, we’re positively eager to create the systems to make us redundant.

The usual response to this is the one given to other examples of automation: you’ll be free to spend your time in a more meaningful way. With a design system in place, you’ll be freed from the drudgery of manual labour. Instead, you can spend your time doing more important work …like maintaining the design system.

You’ve heard the joke about the factory of the future, right? The factory of the future will have just two living things in it: one worker and one dog. The worker is there to feed the dog. The dog is there to bite the worker if he touches anything.

The web is far from perfect, but I think we underrate how resilient it can be.

If you thought maintaining a web project was hard, just wait till you try keeping an app in the app store…

Just before the 2019 holidays, I received an email from Apple notifying me that the app “does not follow one or more of the App Store Review Guidelines.” I signed in to Apple’s Resource Center, where it elaborated that the app had gone too long without an update. There were no greater specifics, no broken rules or deprecated dependencies, they just wanted some sort of update to prove that it was still being maintained or they’d pull the app from the store in December.

The sentiment about “engine diversity” points to a growing mindset among (primarily) Google employees involved with the Chromium project that treats getting new features into Chromium as a much higher priority than working with other implementations.

Needless to say, I agree with this:

Proponents of a “move fast and break things” approach to the web tend to defend their approach as defending the web from the dominance of native applications. I absolutely think that situation would be worse right now if it weren’t for the pressure for wide review that multiple implementations has put on the web.

The web’s key differentiator is that it is a part of the commons and that it is multi-stakeholder in nature.

Like Bastian, I’m making a concerted effort now to fly less—offsetting the flights I do take—and to take the train instead. Here’s a description of a train journey to Nottingham for New Adventures, all the way from Germany.

Years ago, the world of web standards was split. Two groups—the W3C and the WHATWG—were working on the next iteration of HTML. They had different ideas about the nature of standardisation.

Broadly speaking, the W3C followed a specification-first approach. Figure out what should be implemented first and foremost. From this perspective, specs can be seen as blueprints for browsers to work from.

The WHATWG, by contrast, were implementation led. The way they saw it, there was no point specifying something if browsers weren’t going to implement it. Instead, specs are there to document existing behaviour in browsers.

I’m over-generalising somewhat in my descriptions there, but the point is that there was an ideological difference of opinion around what standards bodies should do.

This always reminded me of a similar ideological conflict when it comes to language usage.

Language prescriptivists attempt to define rules about what’s right or wrong in a language. Rules like “never end a sentence with a preposition.” Prescriptivists are generally fighting a losing battle and spend most of their time bemoaning the decline of their language because people aren’t following the rules.

Language descriptivists work the exact opposite way. They see their job as documenting existing language usage instead of defining it. Lexicographers—like Merriam-Webster or the Oxford English Dictionary—receive complaints from angry prescriptivists when dictionaries document usage like “literally” meaning “figuratively”.

Dictionaries are descriptive, not prescriptive.

I’ve seen the prescriptive/descriptive divide somewhere else too. I’ve seen it in the world of design systems.

There appear to be two competing approaches to designing design systems.

An intentional design system. The flavour and framework may vary, but the approach generally consists of: design system first → design/build solutions.

An emergent design system. This approach is much closer to the user needs end of the scale by beginning with creative solutions before deriving patterns and systems (i.e. the system emerges from real, coded scenarios).

An intentional design system is prescriptive. An emergent design system is descriptive.

I think we can learn from the worlds of web standards and dictionaries here. A prescriptive approach might give you a beautiful design system, but if it doesn’t reflect the actual product, it’s fiction. A descriptive approach might give a design system with imperfections and annoying flaws, but at least it will be accurate.

I think it’s more important for a design system to be accurate than beautiful.

Read fiction. I don’t mean “read sf to have ideas about the future.” I mean “read any form of fiction, genre or no”. Fiction allows us to have other ideas, live other lives, see other perspectives. It allows us to escape and re-consider the world from outside ourselves. It allows us to think at lengths and timescales that we may not from day-to-day. It is a shortcut to containing multitudes; to other minds.

I feel there is something beyond the technological that is the real trick to a site that lasts: you need to have some stake in the game. You don’t let your URLs die because you don’t want them to. They matter to you. You’ll tend to them if you have to. They benefit you in some way, so you’re incentivized to keep them around. That’s what makes a page last.

It is fortifying to remember that the very idea of artificial intelligence was conceived by one of the more unquantifiably original minds of the twentieth century. It is hard to imagine a computer being able to do what Alan Turing did.

But I was chatting to Amber the other day, and I mentioned how I can see the theoretical justification for Microsoft’s decision …even if I don’t quite buy it myself.

Picture, if you will, something I’ll call the bar of unity. It’s a measurement of how much collaboration is happening between browser makers.

In the early days of the web, the bar of unity was very low indeed. The two main browser vendors—Microsoft and Netscape—not only weren’t collaborating, they were actively splintering the languages of the web. One of them would invent a new HTML element, and the other would invent a completely different element to do the same thing (remember abbr and acronym?). One of them would come up with one model for interacting with a document through JavaScript, and the other would come up with a completely different model to do the same thing (remember document.all and document.layers?).

There wasn’t enough collaboration. Our collective anger at this situation led directly to the creation of The Web Standards Project.

Eventually, those companies did start collaborating on standards at the W3C. The bar of unity was raised.

This has been the situation for most of the web’s history. Different browser makers agreed on standards, but went their own separate ways on implementation. That’s where they drew the line.

Now that line is being redrawn. The bar of unity is being raised. Now, a number of separate browser makers—Google, Samsung, Microsoft—not only collaborate on standards but also on implementation, sharing a codebase.

The bar of unity isn’t right at the top. Browsers can still differentiate in their user interfaces. Edge, for example, can—and does—offer very sensible defaults for blocking trackers. That’s much harder for Chrome to do, given that Google are amongst the worst offenders.

So these browsers are still competing, but the competition is no longer happening at the level of the rendering engine.

I can see how this looks like a positive development. In fact, from this point of view, Mozilla are getting in the way of progress by having a separate codebase (yes, this is a genuinely-held opinion by some people).

On the face of it, more unity sounds good. It sounds like more collaboration. More cooperation.

But then I think of situations where complete unity isn’t necessarily a good thing. Take political systems, for example. If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!

There’s a sweet spot somewhere in between where there’s a base level of agreement and cooperation, but there’s also plenty of room for disagreement and opposition. Right now, the browser landscape is just about still in that sweet spot. It’s like a two-party system where one party has a crushing majority. Checks and balances exist, but they’re in peril.

Firefox is one of the last remaining representatives offering an alternative. The least we can do is support it.

Everyone is busy building stuff for right now, today, rarely for tomorrow. But it would be nice to also have stuff that lasts a little longer than that.

I just got a new laptop and I decided to go with fresh installs rather than a migration. This really resonates:

It just seems that nobody is interested in building quality, fast, efficient, lasting, foundational stuff anymore. Even when efficient solutions have been known for ages, we still struggle with the same problems: package management, build systems, compilers, language design, IDEs.

I feel like my problem with design in general today is that folks want to burn everything to the ground and start again all the time. Whether that’s with a website, or a new web standard, or a political policy. They don’t want to fix what’s wrong with things bit by bit, everyone wants Thing 2.0 whilst jumping over all the small improvements that are required to get there.

Can you believe we used to willingly tell Google about every single visitor to basecamp.com by way of Google Analytics? Letting them collect every last byte of information possible through the spying eye of their tracking pixel. Ugh.

👏

In this new world, it feels like an obligation to make sure we’re not aiding and abetting those who seek to exploit our data. Those who hoard every little clue in order to piece together a puzzle that’ll ultimately reveal all our weakest points and moments, then sell that picture to the highest bidder.

The divide between what you read in developer social media and what you see on web dev websites, blogs, and actual practice has never in my recollection been this wide. I’ve never before seen web dev social media and forum discourse so dominated by the US west coast enterprise tech company bubble, and I’ve been doing this for a couple of decades now.

Web dev driven by npm packages, frameworks, and bundling is to the field of web design what Java and C# in the 2010s were to web servers. If you work in enterprise software it’s all you can see. Web developers working on CMS themes (or on Rails-based projects) using jQuery and plain old JS—maybe with a couple of libraries imported directly via a script tag—are the unseen dark matter of the web dev community.

The web app manifest—a JSON file of metadata—is particularly useful for describing how your site should behave if someone adds it to their home screen. You can specify what icon should be used. You can specify whether the site should launch in a browser or as a standalone app (practically indistinguishable from a native app). You can specify which URL on the site should be used as the starting point when the site is launched from the home screen.
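A stripped-down manifest covering those three things might look something like this (the name, icon paths, and start URL are made up for illustration):

```json
{
  "name": "Example Site",
  "short_name": "Example",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ],
  "display": "standalone",
  "start_url": "/"
}
```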

So progressive web apps work just fine when you visit them in a browser, but they really shine when you add them to your home screen. It seems like pretty much everyone is in agreement that adding a progressive web app to your home screen shouldn’t be an onerous task. But how does the browser let the user know that it might be a good idea to “install” the web site they’re looking at?

The Samsung Internet browser does ambient badging—a + symbol shows up to indicate that a website can be installed. This is a great approach!

I hope that Chrome on Android will also use ambient badging at some point. To start with though, Chrome notified users that a site was installable by popping up a notification at the bottom of the screen. I think these might be called “toasts”.

Needless to say, the toast notification wasn’t very effective. That’s because we web designers and developers have spent years teaching people to immediately dismiss those notifications without even reading them. Accept our cookies! Sign up to our newsletter! Install our native app! Just about anything that’s user-hostile gets put in a notification (either a toast or an overlay) and shoved straight in the user’s face before they’ve even had time to start reading the content they came for in the first place. Users will then either:

turn around and leave, or

use muscle memory to reach for that X in the corner of the notification.

A tiny fraction of users might actually click on the call to action, possibly by mistake.

Chrome didn’t abandon the toast notification for progressive web apps, but it did change when they would appear. Rather than the browser deciding when to show the prompt—usually when the user has just arrived on the site—a new JavaScript event called beforeinstallprompt can be used.

It’s a bit weird though. You have to “capture” the event that fires when the prompt would have normally been shown, subdue it, hold on to that event, and then re-release it when you think it should be shown (like when the user has completed a transaction, for example, and having your site on the home screen would genuinely be useful). That’s a lot of hoops. Here’s the code I use on The Session to only show the installation prompt to users who are logged in.
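I won’t reproduce the exact code here, but the general shape of that hoop-jumping looks something like this, where isLoggedIn, showInstallButton, and the #install-button element are stand-ins for whatever logic and markup you actually use:

```javascript
// Hold on to the deferred event so it can be re-released later.
let deferredPrompt = null;

window.addEventListener('beforeinstallprompt', (event) => {
  // Stop the browser from showing its own prompt straight away.
  event.preventDefault();
  deferredPrompt = event;
  // Only surface the option when it's likely to be useful;
  // isLoggedIn and showInstallButton are made-up stand-ins.
  if (isLoggedIn()) {
    showInstallButton();
  }
});

// Re-release the prompt when the user actively asks for it.
const installButton = document.querySelector('#install-button'); // made-up element
if (installButton) {
  installButton.addEventListener('click', async () => {
    if (!deferredPrompt) return;
    deferredPrompt.prompt();
    // userChoice resolves with 'accepted' or 'dismissed'.
    const { outcome } = await deferredPrompt.userChoice;
    deferredPrompt = null;
  });
}
```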

The end result is that the user is still shown a toast notification, but at least this time it’s the site owner who has decided when it will be shown. The Chrome team call this notification “the mini-info bar”, and Pete acknowledges that it’s not ideal:

The mini-infobar is an interim experience for Chrome on Android as we work towards creating a consistent experience across all platforms that includes an install button into the omnibox.

I think “an install button in the omnibox” means ambient badging in the browser interface, which would be great!

Anyway, back to that thread on Github. Basically, neither Apple nor Mozilla are going to implement the beforeinstallprompt event (well, technically Mozilla have implemented it but they’re not going to ship it). That’s fair enough. It’s an interim solution that’s not ideal for all the reasons I’ve already covered.

But there’s a lot of pushback. Even if the details of beforeinstallprompt are troublesome, surely there should be some way for site owners to let users know that they can—or should—install a progressive web app? As a site owner, I have a lot of sympathy for that viewpoint. But I also understand the security and usability issues that can arise from bad actors abusing this mechanism.

Still, I have to hand it to Chrome: even if we put the beforeinstallprompt event to one side, the browser still has a mechanism for letting users know that a progressive web app can be installed—the mini info bar. It’s not a great mechanism, but it’s better than nothing. Nothing is precisely what Firefox and Safari currently offer (though Firefox is experimenting with something).

In the case of Safari, not only do they not provide a mechanism for letting the user know that a site can be installed, but since the last iOS update, they’ve buried the “add to home screen” option even deeper in the “sharing sheet” (the list of options that comes up when you press the incomprehensible rectangle-with-arrow-emerging-from-it icon). You now have to scroll below the fold just to find the “add to home screen” option.

Except… there’s another interesting angle to that Github thread. There’s talk of allowing sites that are launched from the home screen to have access to more features than a site inside a web browser. Usually permissions on the web are explicitly granted or denied on a case-by-case basis: geolocation; notifications; camera access, etc. I think this is the first time I’ve heard of one action—adding to the home screen—being used as a proxy for implicitly granting more access. Very interesting. Although that idea seems to be roundly rejected here:

A key argument for using installation in this manner is that some APIs are simply so powerful that the drive-by web should not be able to ask for them. However, this document takes the position that installation alone as a restriction is undesirable.

If you end up with a draft of a short story or a few paragraphs of a typical UX interaction scenario, or a storyboard, or a little film of someone swiping on a screen to show how your App idea would work — you have not done Design Fiction.

What you’ve done is write a short story, which can only possibly be read as a short story.

What you should ideally produce is something a casual observer may mistake for a contemporary artefact, but which only reveals itself as a fiction on closer inspection. It should be very much “as if..” this thing really existed. It should feel real, normal, not some fantasy.

They shouldn’t be aspirational, they should be preventative … my suggestion for setting a budget for any trackable metric is to take the worst data point in the past two weeks and use that as your limit
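That rule is easy to mechanise. Here’s a rough sketch, assuming you have a list of daily measurements for whatever metric you’re tracking (the numbers are made up):

```javascript
// Budget = the worst (highest) data point from the past two weeks.
// Anything above it is a regression; anything below is headroom.
function budgetFromRecentData(dailyMeasurements) {
  const lastTwoWeeks = dailyMeasurements.slice(-14);
  return Math.max(...lastTwoWeeks);
}

// e.g. two weeks of page-weight measurements in kilobytes
const pageWeightBudget = budgetFromRecentData([
  410, 398, 402, 455, 390, 412, 405,
  399, 420, 471, 388, 401, 415, 407,
]); // → 471
```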

The internet did not use a visual spatial metaphor. Despite being accessed through and often encompassed by the desktop environment, the internet felt well and truly placeless (or perhaps everywhere). Hyperlinks were wormholes through the spatial metaphor, allowing a user to skip laterally across directories stored on disparate servers, as well as horizontally, deep into a file system without having to access the intermediate steps. Multiple windows could be open to the same website at once, shattering the illusion of a “single file” that functioned as a piece of paper that only one person could hold. The icons that a user could arrange on the desktop didn’t have a parallel in online space at all.

Writing solidifies, chat dissolves. Substantial decisions start and end with an exchange of complete thoughts, not one-line-at-a-time jousts. If it’s important, critical, or fundamental, write it up, don’t chat it down.

This one feels like it should be Somebody’s Law:

If your words can be perceived in different ways, they’ll be understood in the way which does the most harm.

This is an interesting-looking proposal for CSS grid to be ever so slightly extended to enable Masonry-style auto placement—something that’s tantalisingly close right now, but still requires some JavaScript to do calculations.
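If I recall the proposal correctly, the syntax is a single new keyword on top of what grid already gives us, something along these lines (the selector and sizes are made up):

```css
.gallery {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(10em, 1fr));
  /* The proposed addition: items pack into the shortest track, masonry-style. */
  grid-template-rows: masonry;
}
```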

This is quite remarkable. On the surface, it’s a short article about the Y2K bug, but the hypertextual footnotes go deeper and deeper into memory, loss, grief …I’m very moved by the rawness and honesty nested within.

I’ve been thinking about some of the default behaviours that are built into web browsers.

First off, there’s the decision that a browser makes if you enter a web address without a protocol. Let’s say you type in example.com without specifying whether you’re looking for http://example.com or https://example.com.

Browsers default to HTTP rather than HTTPS. Given that HTTP is older than HTTPS, that makes sense. But given that there’s been such a push for TLS on the web, and the huge increase in sites served over HTTPS, I wonder if it’s time to reconsider that default?

Most websites that are served over HTTPS have an automatic redirect from HTTP to HTTPS (enforced with HSTS). There’s an ever so slight performance hit from that, at least for the very first visit. If, when no protocol is specified, browsers were to attempt to reach the HTTPS port first, we’d get a little bit of a speed improvement.
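(The HSTS part of that is a single response header. Once a browser has seen it over HTTPS, it stops attempting plain HTTP for that site for the specified duration, which is why the redirect cost only really applies to that first visit.)

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
```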

But would that break any existing behaviour? I don’t know. I guess there would be a bit of a performance hit in the other direction. That is, the browser would try HTTPS first, and when that doesn’t exist, go for HTTP. Sites served only over HTTP would suffer that little bit of lag.

Whatever the default behaviour, some sites are going to pay that performance penalty. Right now it’s being paid by sites that are served over HTTPS.

I thought I might be able to get away with omitting meta name="viewport". Apparently not! Maybe someday.

This all goes back to the default behaviour of Mobile Safari when the iPhone was first released. Most sites wouldn’t display correctly if one pixel were treated as one pixel. That’s because most sites were built with the assumption that they would be viewed on monitors rather than phones. Only weirdos like me were building sites without that assumption.

So the default behaviour in Mobile Safari is to assume a page width of 1024 pixels, and then shrink that down to fit on the screen …unless the developer over-rides that behaviour with a viewport meta tag. That default behaviour was adopted by other mobile browsers. I think it’s a universal default.
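That override is the one line that most of us now put in the head of every page without a second thought:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```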

But the web has changed since the iPhone was released in 2007. Responsive design has swept the web. What would happen if mobile browsers were to assume width=device-width?

The viewport meta element always felt like a (proprietary) band-aid rather than a long-term solution—for one thing, it’s the kind of presentational information that belongs in CSS rather than HTML. It would be nice if we could bid it farewell.

While being driven around England it struck me that humans are currently like the filling in a sandwich between one slice of machine — the satnav — and another — the car. Before the invention of sandwiches the vehicle was simply a slice of machine with a human topping. But now it’s a sandwich, and the two machine slices are slowly squeezing out the human filling and will eventually be stuck directly together with nothing but a thin layer of API butter. Then the human will be a superfluous thing, perhaps a little gherkin on the side of the plate.

A look at the trend towards larger and larger font sizes for body copy on the web, culminating with Resilient Web Design.

There are some good arguments here for the upper limit on the font size there being too high, so I’ve adjusted it slightly. Now on large screens, the body copy on Resilient Web Design is 32px (2 times 1em), down from 40px (2.5 times 1em).

Most experienced designers want concision—clear, robust, consistent, elegant systems that avoid redundancy. Concise designs are smoother to implement, faster to render, quicker to understand, and easier to hand-off and maintain. Achieving a simplicity with clarity means that you’re engaging with the fundamentals of the problem (and of your craft) at the correct fidelity. You’ve cut through complexity with insight, understanding, and committed decision-making. That third one is critical. A lot of complexity comes from an unwillingness to commit to the things that insight and understanding surface.

If you haven’t seen The Rise Of Skywalker, avert your gaze for I shall be revealing spoilers here…

I wrote about what I thought of The Force Awakens. I wrote about what I thought of The Last Jedi. It was inevitable that I was also going to write about what I think of The Rise Of Skywalker. If nothing else, I really enjoy going back and reading those older posts and reminding myself of my feelings at the time.

I went to a midnight screening with Jessica after we had both spent the evening playing Irish music at our local session. I was asking a lot of my bladder.

I have to admit that my first reaction was …ambivalent. I didn’t hate it but I didn’t love it either.

Maybe I just find it hard to really get into the flow when I’m seeing a new Star Wars film for the very first time.

This time there were very specific things that I could point to and say “I don’t like it!” For a start, there’s the return of Palpatine.

I think the Emperor has always been one of the dullest characters in Star Wars. Even in Return Of The Jedi, he just comes across as a paper-thin one-dimensional villain who’s evil just because he’s evil. That works great when he’s behind the scenes manipulating events, but it makes for dull on-screen shenanigans, in my opinion. The pantomime nature of Emperor Palpatine seems more Harry Potter than Star Wars to me.

When I heard the Emperor was returning, my expectations sank. To be fair though, I think it was a very good move not to make the return of Palpatine a surprise. I had months—ever since the release of the first teaser trailer—to come to terms with it. Putting it in the opening crawl and the first scene says, “Look, he’s back. Don’t ask how, just live with it.” That’s fair enough.

So in the end, the thing that I thought would bug me—the return of Palpatine—didn’t trouble me much. But what really bugged me was the unravelling of one of my favourite innovations in The Last Jedi regarding Rey’s provenance. I wrote at the time:

I had resigned myself to the inevitable reveal that would tie her heritage into an existing lineage. What an absolute joy, then, that The Force is finally returned into everyone’s hands!

What bothered me wasn’t so much that The Rise Of Skywalker undoes this, but that the undoing is so unnecessary. The plot would have worked just as well without the revelation that Rey is a Palpatine. If that revelation were crucial to the story, I would go with it, but it just felt like making A Big Reveal for the sake of making A Big Reveal. It felt …cheap.

I have to say, that’s how I responded to a lot of the kitchen sink elements in this film when I first saw it. It was trying really, really hard to please, and yet many of the decisions felt somewhat lazy to me. There were times when it felt like a checklist.

In a way, there was a checklist, or at least a brief. JJ Abrams has spoken about how this film needed to not just wrap up one trilogy, but all nine films. But did it though? I think I would’ve been happier if it had kept its scope within the bounds of these new sequels.

That’s been a recurring theme for me with all three of these films. I think they work best when they’re about the new characters. I’m totally invested in them. Leaning on nostalgia and the cultural memory of the previous films and their characters just isn’t needed. I would’ve been fine if Luke, Han, and Leia never showed up on screen in this trilogy—that’s how much I’m sold on Rey, Finn, and Poe.

But I get it. The brief here is to tie everything together. And as JJ Abrams has said, there was no way he was going to please everyone. But it’s strange that he would attempt to please the most toxic people clamouring for change. I’m talking about the racists and misogynists that were upset by The Last Jedi. The sidelining of Rose Tico in The Rise Of Skywalker sure reads a lot like a victory for them. Frankly, that’s the one aspect of this film that I’m always going to find disappointing.

Because it turns out that a lot of the other things that I was initially disappointed by evaporated upon second viewing.

Now, I totally get that a film needs to work for a first viewing. But if any category of film needs to stand up to repeat viewing, it’s a Star Wars film. In the case of The Rise Of Skywalker, I think that repeat viewing might have been prioritised. And I’m okay with that.

Take the ridiculously frenetic pace of the multiple maguffin-led plotlines. On first viewing, it felt rushed and messy. I got the feeling that the double-time pacing was there to brush over any inconsistencies that would reveal themselves if the film were to pause even for a minute to catch its breath.

But that wasn’t the case. On second viewing, things clicked together much more tightly. It felt much more like a well-oiled—if somewhat frenetic—machine rather than a cobbled-together Heath Robinson contraption that might collapse at any moment.

My personal experience of viewing the film for the second time was a lot of fun. I was with my friend Sammy, who is not yet a teenager. His enjoyment was infectious.

At the end, after we see Rey choose her new family name, Sammy said “I knew she was going to say Skywalker!”

“I guess that explains the title”, I said. “The Rise Of Skywalker.”

“Or”, said Sammy, “it could be talking about Ben Solo.”

I hadn’t thought of that.

When I first saw The Rise Of Skywalker, I was disappointed by all the ways it was walking back the audacious decisions made in The Last Jedi, particularly Rey’s parentage and the genetic component to The Force. But on second viewing, I noticed the ways that this film built on the previous one. Finn’s blossoming sensitivity keeps the democratisation of The Force on the table. And the mind-melding connection between Rey and Kylo Ren that started in The Last Jedi is crucial for the plot of The Rise Of Skywalker.

Once I was able to get over the decisions I didn’t agree with, I was able to judge the film on its own merits. And you know what? It’s really good!

On the technical level, it was always bound to be good, but I mean on an emotional level too. If I go with it, then I’m rewarded with a rollercoaster ride of emotions. There were moments when I welled up (they mostly involved Chewbacca: Chewie’s reaction to Leia’s death; Chewie getting the medal …the only moment that might have topped those was Han Solo’s “I know”).

So just in case there’s any doubt—given all the criticisms I’ve enumerated—let me be clear: I like this film. I very much look forward to seeing it again (and again).

A friend’s review of “The Rise of Skywalker”, which also serves as a perfect summary of JJ Abrams’ career: “A very well-executed lack of creativity.”

I think I might substitute the word “personality” for “creativity”. However you feel about The Last Jedi, there’s no denying that it embodies the vision of one person:

I think the reason why The Last Jedi works so well is that Rian Johnson makes no concessions to my childhood, or anyone else’s. This is his film. Of all the millions of us who were transported by this universe as children, only he gets to put his story onto the screen and into the saga. There are two ways to react to this. You can quite correctly exclaim “That’s not how I would do it!”, or you can go with it …even if that means letting go of some deeply-held feelings about what could’ve, should’ve, would’ve happened if it were our story.

JJ Abrams, on the other hand, has done his utmost to please us. I admire that, but I feel it comes at a price. The storytelling isn’t safe exactly, but it’s far from personal.

The result is that The Rise Of Skywalker is supremely entertaining—especially on repeat viewing—and it has a big heart. I just wish it had more guts.

Then there were the usual benefits that come with speaking at international conferences like An Event Apart and Beyond Tellerrand. I got to visit interesting places, eat excellent food, and meet good people.

Not everything was rosy. There were some sad life events for friends and family. And of course the whole political situation here in the UK has been just awful in 2019.

So onwards to 2020. I need to remind myself that many things are going well in the world but it can be hard to keep that in mind. At a local—nay, parochial—level, there’s a good chance that 2020 will deliver a hard Brexit. I have no faith in the competence or motivations of the current government to do otherwise (I keep reminding myself that I don’t have to stay in this country if it falls apart). And at the global scale, our attempts to mitigate the climate crisis are proceeding too slowly.

That’s something I need to take more personal responsibility for in 2020: fewer plane journeys, more trains, and more carbon offsetting.

Ultimately, it’s a fairly arbitrary moment in time but I do like to pause for a moment and look back at the year that’s just been. For all its faults, I have happy memories. I’m healthy. I played lots of music. I ate well. I spent time with friends and family.

I look forward to more of that in the third decade of the 21st century.

He also makes the uncomfortable observation that design systems work is not just hard, it’s inherently demoralising and soul-crushing.

My hunch is this: folks can’t talk about real design systems problems because it will show their company as being dysfunctional and broken in some way. This looks bad for their company and hence looks bad for them. But hiding those mistakes and shortcomings by glossing over everything doesn’t just make it harder for us personally, it hinders progress within the field itself.

If a human civilization beyond Earth ever comes into being, this will be unprecedented in any historical context we might care to invoke—unprecedented in recorded history, unprecedented in human history, unprecedented in terrestrial history, and so on. There have been many human civilizations, but all of these civilizations have arisen and developed on the surface of Earth, so that a civilization that arises or develops away from the surface of Earth would be unprecedented and in this sense absolutely novel even if the institutional structure of a spacefaring civilization were the same as the institutional structure of every civilization that has existed on Earth. For this civilizational novelty, some human novelty is a prerequisite, and this human novelty will be expressed in the mythology that motivates and sustains a spacefaring civilization.

A deep dive into deep time:

Record-keeping technologies introduce an asymmetry into history. First language, then written language, then printed books, and so on and so forth. Should human history extend as far into the deep future as it now extends into the deep past, the documentary evidence of past beliefs will be a daunting archive, but in an archive so vast there would be a superfluity of resources to trace the development of human mythologies in a way that we cannot now trace them in our past. We are today creating that archive by inventing the technologies that allow us to preserve an ever-greater proportion of our activities in a way that can be transmitted to our posterity.