A lot of my writing in 2018 was on technical topics—front-end development, service workers, and so on—but I should really make more of an effort to write about a wider range of topics. I always like when Zeldman writes about his glamorous life. Maybe in 2019 I’ll spend more time letting you know what I had for lunch.

I really enjoy writing words on this website. If I go too long between blog posts, I start to feel antsy. The only relief is to move my fingers up and down on the keyboard and publish something. Sounds like a bit of an addiction, doesn’t it? Well, as habits go, this is probably one of my healthier ones.

Thanks for reading my words in 2018. I didn’t write them for you—I wrote them for me—but it’s always nice when they resonate with others. I’ll keep on writing my brains out in 2019.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

A website prompts the user for permission to send push notifications.

The user grants permission.

A whole lot of complicated stuff happens behind the scenes.

Next time the website publishes something relevant, it fires a push message containing the details of the new URL.

The user’s service worker receives the push message (even if the site isn’t open).

The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.
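
Here’s a sketch of what that last step usually looks like in the service worker (assuming the push payload is JSON with a title and a url, which is an assumption on my part):

```javascript
// A typical push handler: show a notification when a push message arrives.
self.addEventListener('push', event => {
  const data = event.data.json(); // assumes a JSON payload with title and url
  event.waitUntil(
    self.registration.showNotification(data.title, {
      data: { url: data.url } // picked up later by a notificationclick handler
    })
  );
});
```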

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

A website prompts the user for permission to send push notifications.

The user grants permission.

A whole lot of complicated stuff happens behind the scenes.

Next time the website publishes something relevant, it fires a push message containing the details of the new URL.

The user’s service worker receives the push message (even if the site isn’t open).

The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.
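
In code, the difference from the first flow is tiny: the push handler caches instead of notifying. A sketch, again assuming a JSON payload with a url:

```javascript
// Sebastiaan's alternate flow, sketched: no notification, just a silent
// fetch-and-cache of the URL in the push message.
self.addEventListener('push', event => {
  const data = event.data.json();
  event.waitUntil(
    caches.open('pages').then(cache => cache.add(data.url))
  );
});
```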

It worked.

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

Receive messages pushed from the server.

Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.
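
The coupling even shows up in the Push API itself: when a site subscribes, it has to promise the browser that every push will be user-visible. Roughly like this (the server key is a placeholder):

```javascript
// Subscribing to push. Browsers currently insist on userVisibleOnly: true,
// so there's no way to subscribe for silent, notification-free pushes.
navigator.serviceWorker.ready.then(registration => {
  return registration.pushManager.subscribe({
    userVisibleOnly: true, // false gets rejected
    applicationServerKey: VAPID_PUBLIC_KEY // placeholder for a real key
  });
});
```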

Think of the use cases:

I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.

I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.

I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with content you never asked for. But websites can already do that, and they don’t need to be granted any permissions to do so; just by visiting a website, you allow it to add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission through a dialogue that contains the word “notification.”

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way: periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.
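
As proposed, it would look something like this (a sketch of the proposal; names and details could well change):

```javascript
// In the page: register a periodic sync with a minimum interval.
navigator.serviceWorker.ready.then(registration => {
  return registration.periodicSync.register('fetch-new-content', {
    minInterval: 24 * 60 * 60 * 1000 // at most once a day
  });
});

// In the service worker: poll for new content when the sync fires.
self.addEventListener('periodicsync', event => {
  if (event.tag === 'fetch-new-content') {
    // '/latest' is a placeholder for whatever URL serves fresh content.
    event.waitUntil(caches.open('pages').then(cache => cache.add('/latest')));
  }
});
```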

By the time I got back to Brighton, my brain was full …just in time for FF Conf.

All of the events were very different, but equally enjoyable. It was also quite nice to just attend events without speaking at them.

Indie Web Camp Berlin was terrific. There was an excellent turnout, and once again, I found that the format was just right: a day of discussions (BarCamp style) followed by a day of doing (coding, designing, hacking). I got very inspired on the first day, so I was raring to go on the second.

What I like to do on the second day is try to complete two tasks: one that’s fairly straightforward, and one that’s a bit tougher. That way, when it comes time to demo at the end of the day, even if I haven’t managed to complete the tougher one, I’ll still be able to demo the simpler one.

In this case, the tougher one was also tricky to demo. It involved a lot of invisible behind-the-scenes plumbing. I was tweaking my webmention endpoint (stop sniggering—tweaking your endpoint is no laughing matter).

Up until now, I could handle straightforward webmentions, and I could handle updates (if I receive more than one webmention from the same link, I check it each time). But I needed to also handle deletions.

The spec is quite clear on this. A 404 isn’t enough to trigger a deletion—that might be a temporary state. But a status of 410 Gone indicates that a resource was once here but has since been deliberately removed. In that situation, any stored webmentions for that link should also be removed.
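
My endpoint is a PHP affair, but the logic sketches easily in JavaScript (the storage helper here is hypothetical):

```javascript
// Re-verify a webmention source. A 410 Gone means deliberate removal,
// so stored webmentions for that URL get deleted; a 404 is left alone.
async function reverifySource(sourceURL) {
  const response = await fetch(sourceURL);
  if (response.status === 410) {
    await deleteStoredWebmentions(sourceURL); // hypothetical storage helper
    return;
  }
  // Otherwise, re-parse the response body for the link as usual.
}
```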

Anyway, I think I got it working, but it’s tricky to test and even trickier to demo. “Not to worry”, I thought, “I’ve always got my simpler task.”

For that, I chose to add a little map to my homepage showing the last location I published something from. I’ve been geotagging all my content for years (journal entries, notes, links, articles), but not really doing anything with that data. This is a first step to doing something interesting with many years of location data.

I’ve got it working now, but the demo gods really weren’t with me at Indie Web Camp. Both of my demos failed. The webmention demo failed quite embarrassingly.

As well as handling deletions, I also wanted to handle updates where a URL that once linked to a post of mine no longer does. Just to be clear, the URL still exists—it’s not 404 or 410—but it has been updated to remove the original link back to one of my posts. I know this sounds like another very theoretical situation, but I’ve actually got an example of it on my very first webmention test post from five years ago. Believe it or not, there’s an escort agency in Nottingham that’s using webmention as a vector for spam. They post something that does link to my test post, send a webmention, and then remove the link to my test post. I almost admire their dedication.

Still, I wanted to foil this particular situation so I thought I had updated my code to handle it. Alas, when it came time to demo this, I was using someone else’s computer, and in my attempt to right-click and copy the URL of the spam link …I accidentally triggered it. In front of a room full of people. It was mildly NSFW, but more worryingly, a potential Code Of Conduct violation. I’m very sorry about that.

Apart from the humiliating demo, I thoroughly enjoyed Indie Web Camp, and I’m going to keep adjusting my webmention endpoint. There was a terrific discussion around the ethical implications of storing webmentions, led by Sebastian, based on his epic post from earlier this year.

We established early in the discussion that we weren’t going to try to solve legal questions—like GDPR “compliance”, which varies depending on which lawyer you talk to—but rather try to figure out what the right thing to do is.

By receiving a webmention in the first place, I was inferring a willingness for the link to be made public. That’s not necessarily true, as someone pointed out: a CMS could be automatically sending webmentions, which the author might be unaware of.

If the linking post is marked up in h-entry, I was inferring a willingness for the content to be republished. Again, not necessarily true.

That second inference of mine—that publishing in a particular format somehow grants permissions—actually has an interesting precedent: Google AMP. Simply by including the Google AMP script on a web page, you are implicitly giving Google permission to store a complete copy of that page and serve it from their servers instead of sending people to your site. No terms and conditions. No checkbox ticked. No “I agree” button pressed.

Just sayin’.

Anyway, when it comes to my own processing of webmentions, I’m going to take some of the suggestions from the discussion on board. There are certain signals I could be looking for in the linking post:

Does it include a link to a licence?

Is there a restrictive robots.txt file?

Are there meta declarations that say noindex?

Each one of these could help me infer whether or not I should publish a webmention. I quickly realised that what we’re talking about here is an algorithm.

Despite its current usage to mean “magic”, an algorithm is a recipe. It’s a series of steps that contribute to a decision point. The problem is that, in the case of silos like Facebook or Instagram, the algorithms are secret (which probably contributes to their aura of magical thinking). If I’m going to write an algorithm that handles other people’s information, I don’t want to make that mistake. Whatever steps I end up codifying in my webmention endpoint, I’ll be sure to document them publicly.
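
In that spirit, here’s a first stab at codifying those signals (a sketch only: every name here is hypothetical, and the weighting of the signals is very much up for debate):

```javascript
// A decision recipe for a received webmention, based on signals
// found in the linking post.
async function webmentionPolicy(sourceURL, sourceHTML) {
  const doc = new DOMParser().parseFromString(sourceHTML, 'text/html');
  // Signal: a meta declaration that says noindex.
  const noindex = !!doc.querySelector('meta[name="robots"][content*="noindex"]');
  // Signal: a broadly restrictive robots.txt file.
  const robotsTxt = await fetch(new URL('/robots.txt', sourceURL))
    .then(response => (response.ok ? response.text() : ''));
  const restrictive = /^Disallow:\s*\/\s*$/m.test(robotsTxt);
  // Signal: an explicit licence link on the linking post.
  const hasLicence = !!doc.querySelector('[rel~="license"]');
  return {
    display: !(noindex || restrictive), // show the webmention at all?
    republish: hasLicence               // quote the content, or just link?
  };
}
```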

Anyway, here we are, on my blog, or in your RSS reader. I think I’ll do weaknotes. Some collections of notes. Sometimes. Not very well written probably. Generally written with the urgency of someone who is waiting for a baby to wake up.

Bottom line: any idea worth more than 280 characters (and the value Twitter places on your ideas is zero) belongs on a blog that you own, where you can easily take your important/valuable/life-changing ideas with you and make them easy for others to read and share.

What you write might help someone understand a concept that you may think has been covered enough before. We each have our own unique perspectives and writing styles. One writing style might be more approachable to some, and can therefore help and benefit a large (or even small) number of people in ways you might not expect.

Just write.

Even if only one person learns something from your article, you’ll feel great, knowing that you’ve contributed — even if just a little bit — to this amazing community that we’re all constantly learning from. And if no one reads your article, then that’s also okay. That voice telling you that people are just sitting somewhere watching our every step and judging us based on the popularity of our writing is a big fat pathetic attention-needing liar.

The web can be used to find common connections with folks you find interesting, and who don’t make you feel like so much of a weirdo. It’d be nice to be able to do this in a safe space that is not being surveilled.

Owning your own content, and publishing to a space you own can break through some of these barriers. Sharing your own weird scraps on your own site makes you easier to find by like-minded folks.

At times I think “will anyone read this? does anyone care?”, but I always publish it anyway — and that’s for two reasons. First, it’s a place for me to find stuff I may have forgotten how to do. Secondly, whilst some of this stuff is seemingly super-niche, if one person out there on the web finds it helpful, then that’s good enough for me. After all, I’ve lost count of how many times I’ve read similar posts that have helped me out.

My advice after learning from so many helpful people this weekend is this: if you’re thinking of writing something that explains a weird thing you struggled with on the Internet, do it! Don’t worry about the views and likes and Internet hugs. If you’ve struggled with figuring out this thing then be sure to jot it down, even if it’s unedited and it uses too many commas and you don’t like the tone of it.

Maybe you feel more comfortable writing in short, concise bullets than at protracted, grandiose length. Or maybe you feel more at ease with sarcasm and dry wit than with sober, exhaustive argumentation. Or perhaps you prefer to knock out a solitary first draft and never look back rather than polishing and tweaking endlessly. Whatever the approach, if you can do the work to find a genuine passion for writing, what a powerful tool you’ll have.

I want to finally begin writing about psychology. A friend of mine shared his opinion that writing about this is probably best left to experts. I tried to tell him I think that people should write about whatever they want. He argued that whatever he could write about psychology has probably already been written about a thousand times. I told him that I’m going to be writer number 1001, and I’m going to write something great that nobody has written before.

A little while back, I switched from using Chrome as my day-to-day browser to using Firefox. I could feel myself getting a bit too comfortable with one particular browser, and that’s not good. I reckon it’s good to shake things up a little every now and then. Besides, there really isn’t that much difference once you’ve transferred over bookmarks and cookies.

Unfortunately I’m being bitten by this little bug in Firefox. It causes some of my bookmarklets to fail on certain sites with strict Content Security Policies (and CSPs shouldn’t affect bookmarklets). I might have to switch back to Chrome because of this.

I use bookmarklets throughout the day. There’s the Huffduffer bookmarklet, of course, for whenever I come across a podcast episode or other piece of audio that I want to listen to later. But there’s also my own home-rolled bookmarklet for posting links to my site. It doesn’t do anything clever—it grabs the title and URL of the currently open page and pre-populates a form in a new window, leaving me to add a short description and some tags.
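
The whole thing amounts to a few lines (the admin URL here is a placeholder):

```javascript
// A "Bookmark it" style bookmarklet: grab the title and URL of the
// current page and open a pre-populated posting form in a new window.
javascript:(function () {
  window.open(
    'https://example.com/admin/links/new' +
    '?url=' + encodeURIComponent(location.href) +
    '&title=' + encodeURIComponent(document.title)
  );
})();
```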

Should you wish to keep track of everything I’m linking to, there’s a twitterbot you can follow called @adactioLinks. It uses a simple IFTTT recipe to poll my RSS feed of links and send out a tweet whenever there’s a new entry.

Or you can drink straight from the source and subscribe to the RSS feed itself, if you’re still rocking it old-school. But if RSS is your bag, then you might appreciate a way to filter those links…

All my links are tagged. Heavily. This is because all my links are “notes to future self”, and all my future self has to do is ask “what would past me have tagged that link with?” when I’m trying to find something I previously linked to. I end up using my site’s URLs as an interface.

I hope there’s something in there that you like. It’s always a nice bonus when other people like something I’ve written, but I write for myself first and foremost. Writing is how I figure out what I think. I will, of course, continue to write and publish on my website in 2018. I’d really like it if you did the same.

I was idly thinking about the different ways I can post to adactio.com. I decided to count the ways.

Admin interface

This is the classic CMS approach. In my case the CMS is a crufty hand-rolled affair using PHP and MySQL that I wrote years ago. I log in to an admin interface and fill in a form, putting the text of my posts into a textarea. In truth, I usually write in a desktop text editor first, and then paste that into the textarea. That’s what I’m doing now—copying and pasting Markdown from the Typed app.

Directly from my site

If I’m logged in, I get a stripped down posting interface in the notes section of my site.

Bookmarklet

This is how I post links. When I’m at a URL I want to bookmark, I hit the “Bookmark it” bookmarklet in my browser’s bookmarks bar. That pops open a version of the admin interface tailored specifically for links. I really, really like bookmarklets. The one big downside is that they don’t work on mobile.

Instagram

Thanks to Aaron’s OwnYourGram service—and the fact that my site has a micropub endpoint—I can post images from Instagram to my site. This used to happen instantaneously, but Instagram changed their API rules for the worse. Between that and their shitty “algorithmic” timeline, I find myself using the service less and less. At this point I’m only on there for the doggos.

Swarm

OwnYourGram and OwnYourSwarm are very similar and could probably be abstracted into a generic service for posting from third-party apps to micropub endpoints. I’d quite like to post my check-ins on Untappd to my site.

Other people’s admin interfaces

Thanks to rel="me" and IndieAuth, I can log into other people’s posting interfaces using my own website as the log-in, and post to my micropub endpoint, like this. Quill is a good example of this. I don’t use it that much, but I really should—the editor interface is quite Medium-like in its design.

Anyway, those are the different ways I can update my website that I can think of right now.

Syndication

In terms of output, I’ve got a few different ways of syndicating what I post here:

I syndicate just about everything to my Facebook account using If This, Then That recipes (RSS to Facebook posts). Facebook is a roach motel. I never post any original content there—everything starts here on my site.

Just so you know, if you comment on one of my posts on Facebook, I probably won’t see it. But if you reply to a copy of one of my posts on Twitter or Instagram, it will show up over here on adactio.com thanks to the magic of Brid.gy and webmention.

One of the topics I enjoy discussing at Indie Web Camps is how we can use design to display activity over time on personal websites. That’s how I ended up with sparklines on my site—it was a direct result of a discussion at Indie Web Camp Nuremberg a year ago:

During the discussion at Indie Web Camp, we started looking at how silos design their profile pages to see what we could learn from them. Looking at my Twitter profile, my Instagram profile, my Untappd profile, or just about any other profile, it’s a mixture of bio and stream, with the addition of stats showing activity on the site—signs of life.

Perhaps the most interesting visual example of my activity over time is on my Github profile. Halfway down the page there’s a calendar heatmap that uses colour to indicate the amount of activity. What I find interesting is that it’s using two axes of time over a year: weeks of the year across the X axis and days of the week down the Y axis.

I wanted to try something similar, but showing activity by time of day down the Y axis. A month of activity feels like the right range to display, so I set about adding a calendar heatmap to monthly archives. I already had the data I needed—timestamps of posts. That’s what I was already using to display sparklines. I wrote some code to loop over those timestamps and organise them by day and by hour. Then I spit out a table with days for the columns and clumps of hours for the rows.
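
The grouping step amounts to a couple of lookups per timestamp. Something like this, assuming an array of Date objects called timestamps:

```javascript
// Bucket post timestamps by day of the month (columns) and by
// four-hour clump (rows), counting the posts in each cell.
const buckets = {};
for (const posted of timestamps) {
  const day = posted.getDate();                    // 1 to 31
  const clump = Math.floor(posted.getHours() / 4); // 0 to 5: six rows
  const key = `${day}:${clump}`;
  buckets[key] = (buckets[key] || 0) + 1;
}
// Each table cell then takes its styling from buckets[key].
```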

I’m using colour (well, different shades of grey) to indicate the relative amounts of activity, but I decided to use size as well. So it’s also a bubble chart.

It doesn’t work very elegantly on small screens: the table is clipped horizontally and can be swiped left and right. Ideally the visualisation itself would change to accommodate smaller screens.

Just as with Indie Web Camp Düsseldorf the weekend before, it was a fun two days—one day of discussions, followed by one day of making.

I spent most of the second day playing around with a new service that Aaron created called OwnYourSwarm. It’s very similar to his other service, OwnYourGram. Whereas OwnYourGram is all about posting pictures from Instagram to your own site, OwnYourSwarm is all about posting Swarm check-ins to your own site.

Usually I prefer to publish on my own site and then push copies out to other services like Twitter, Flickr, etc. (POSSE—Publish on Own Site, Syndicate Elsewhere). In the case of Instagram, that’s impossible because of their ludicrously restrictive API, so I have to go the other way around (PESOS—Publish Elsewhere, Syndicate to Own Site). When it comes to check-ins, I could do it from my own site, but I’d have to create my own databases of places to check into. I don’t fancy that much (yet) so I’m using OwnYourSwarm to PESOS check-ins.

The great thing about OwnYourSwarm is that I didn’t have to do anything. I already had the building blocks in place.

First of all, I needed some way to authenticate as my website. IndieAuth takes care of all that. All I needed was rel="me" attributes pointing from my website to my profiles on Twitter, Flickr, Github, or any other services that provide OAuth. Then I can piggyback on their authentication flow (this is also how you sign in to the Indie Web wiki).

Anyway, I already had IndieAuth and micropub set up on my site, so all I had to do was log in to OwnYourSwarm and I immediately started to get check-ins posted to my own site. They show up the same as any other note, so I decided to spend my time at Indie Web Camp Nuremberg making them look a bit different. I used Mapbox’s static map API to show an image of the location of the check-in. What’s really nice is that if I post a photo on Swarm, that gets posted to my own site too. I had fun playing around with the display of photo+map on my home page stream. I’ve made a page for keeping track of check-ins too.
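
The map itself is just an image with the check-in’s coordinates baked into the URL. Roughly like this (the style, zoom, dimensions, and token are all placeholders; Mapbox’s URL format may differ from what I’ve assumed here):

```javascript
// Build a static map image for a check-in's coordinates.
function staticMap(lat, lon) {
  const img = document.createElement('img');
  img.src = 'https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/' +
    `${lon},${lat},15/300x200?access_token=YOUR_MAPBOX_TOKEN`;
  img.alt = 'Map of check-in location';
  return img;
}
```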

I’ve just come back from a ten-day trip to Germany. The trip kicked off with Indie Web Camp Düsseldorf over the course of a weekend.

Once again the wonderful people at Sipgate hosted us in their beautiful building, and once again myself and Aaron helped facilitate the two days.

Saturday was the BarCamp-like discussion day. Plenty of interesting topics were covered. I led a session on service workers, and that’s also what I decided to work on for the second day—that’s when the talking is done and we get down to making.

I’ve already got a separate cache for pages that gets added to as the user browses around my site. I needed to figure out a way to store the metadata for those pages so that I could then display it on the offline page. I came up with a workable solution, and interestingly, it involved no changes to the service worker script at all.

When you visit any blog post, I put metadata about the page into localStorage (after first checking that there’s an active service worker). It looks something like this (the exact fields stored are illustrative):
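
```javascript
// Store page metadata, keyed by URL (the same key the cache API uses).
if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  window.addEventListener('load', () => {
    const timeElement = document.querySelector('time[datetime]');
    const metadata = {
      title: document.title,
      published: timeElement ? timeElement.getAttribute('datetime') : null
    };
    localStorage.setItem(window.location.href, JSON.stringify(metadata));
  });
}
```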

Meanwhile in my service worker, when you visit that same page, it gets added to a cache called “pages”. Both localStorage and the cache API are using URLs as keys. I take advantage of that on my offline page.

The nice thing about writing JavaScript on my offline page is that I know the page will only be seen by modern browsers that support service workers, so I can use all sorts of fancy features from ES6, or whatever we’re calling it now.

I start by looping through the keys of the “pages” cache (that’s right—the cache API isn’t just for service workers; you can access it from any script). Then I check to see if there is a corresponding localStorage key with the same string (a URL). If there is, I pull the metadata out of local storage and add it to an array called browsingHistory. In outline (a sketch, not the exact code):
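
```javascript
// Loop through the cached pages; for each one that has matching
// metadata in localStorage, add that metadata to the history array.
const browsingHistory = [];
caches.open('pages')
  .then(cache => cache.keys())
  .then(requests => {
    requests.forEach(request => {
      const stored = localStorage.getItem(request.url);
      if (stored) {
        browsingHistory.push(JSON.parse(stored));
      }
    });
    // ...then sort browsingHistory and write it out as a list of links.
  });
```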

All those steps need to be wrapped inside the then clause attached to caches.open("pages") because the cache API is asynchronous.

There you have it. Now if you’re browsing adactio.com and your network connection drops (or my server goes offline), you can choose from a list of pages you’ve previously visited.

The current situation isn’t ideal though. I’ve got a clean-up operation in my service worker to limit the number of items stored in my “pages” cache. The cache never gets bigger than 35 items. But there’s no corresponding clean-up of metadata stored in localStorage. So there could be a lot more bits of metadata in local storage than there are pages in the cache. It’s not harmful, but it’s a bit wasteful.
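
The clean-up follows a familiar pattern: open the cache, count the keys, and delete the oldest entries until it fits. Something along these lines:

```javascript
// Trim a named cache down to a maximum number of items,
// deleting the oldest entries first.
function trimCache(cacheName, maxItems) {
  caches.open(cacheName).then(cache => {
    cache.keys().then(keys => {
      if (keys.length > maxItems) {
        cache.delete(keys[0]).then(() => trimCache(cacheName, maxItems));
      }
    });
  });
}

trimCache('pages', 35); // never more than 35 pages
```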

I can’t do a clean-up of localStorage from my service worker because service workers can’t access localStorage. There’s a very good reason for that: the localStorage API is synchronous, and everything that happens in a service worker needs to be asynchronous.

Service workers can access indexedDB: it’s asynchronous. I could use indexedDB instead of localStorage, but I’m not a masochist. My best bet would be to use the localForage library, which wraps indexedDB in the simple syntax of localStorage.
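
The switch would be pretty small: localForage keeps the shape of the localStorage API but returns promises (metadata and someURL here are placeholders, and the library needs to be loaded first):

```javascript
// Same idea as localStorage.setItem, but asynchronous and indexedDB-backed,
// so it can be used where the synchronous localStorage API can't.
localforage.setItem(window.location.href, metadata).then(() => {
  // Values go in (and come out) as objects: no JSON juggling required.
});

localforage.getItem(someURL).then(stored => {
  // `stored` comes back already deserialised.
});
```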

Jeffrey and Eric never stopped writing on their own sites. Sure, there’s good stuff on there about web design and development, but it’s the writing about their non-web lives that’s so powerful.

There are more people I could mention …but, to be honest, not that many more. Seems like most people are happy to only publish on Ev’s blog or not at all.

I know not everybody wants to write on the web, and that’s fine. But it makes me sad when people choose not to publish their thoughts because they think no-one will be interested, or that it’s all been said before. I understand where those worries come from, but I believe—no, I know—that they are unfounded.

It’s a world wide web out there. There’s plenty of room for everyone. And I, for one, love reading the words of others.

When I was marking up Resilient Web Design I wanted to make sure that people could link to individual sections within a chapter. So I added IDs to all the headings. There’s no UI to expose that though—like the hover pattern that some sites use to show that something is linkable—so unless you know the IDs are there, there’s no way of getting at them other than “view source.”

I’m using the fragmention support to power the index of the book. It relies on JavaScript to work though, so Matthew has come to the rescue again and created a version of the site with IDs for each item linked from the index (I must get around to merging that).

The fragmention functionality is ticking along nicely, with one problem…

It was great—travelling through the land of Steinbeck and Guthrie at the speed of Kerouac and Springsteen. We stopped for the night at Pismo Beach and then continued on, rolling into Santa Monica at sunset.

The weekend was spent in the usual Indie Web Camp fashion: a day of BarCamp-style discussions, followed by a day of hacking on our personal websites.

I did face a bit of a conundrum. Both my home page stream and my tag pages show posts in reverse chronological order, with the newest posts at the top. I’ve decided to replicate that for the archive view, but I’m not sure if that’s the right decision. Maybe the list of years should begin with 2001 and end with 2016, instead of the other way around. And maybe when you’re looking at a month of posts, you should see the first posts in that month at the top.

Anyway, I’ll live with it in reverse chronological order for a while and see how it feels. I’m just glad I managed to get it done—I’ve been meaning to do it for quite a while. Once again, I’m amazed by how much gets accomplished when you’re in the same physical space as other helpful, motivated people all working on improving their indie web presence, little by little.

That’s the tweetiest of tweets, isn’t it? (and just look at the status ID—only five digits!)

Of course, back then we didn’t call them tweets. We didn’t know what to call them. We didn’t know what to make of this thing at all.

I say “we”, but when I signed up, there weren’t that many people on Twitter that I knew. Because of that, I didn’t treat it as a chat or communication tool. It was more like speaking into the void, like blogging is now. The word “microblogging” was one of the terms floating around, grasped by those of us trying to get to grips with what this odd little service was all about.

Twenty days after I started posting to Twitter, I wrote about how more and more people that I knew were joining:

The usage of Twitter is, um, let’s call it… emergent. Whenever I tell anyone about it, their first question is “what’s it for?”

Fair question. But there isn’t really an answer. You send messages either from the website, your mobile phone, or chat. What you post and why you’d want to do it is entirely up to you.

I was quite the cheerleader for Twitter:

Overall, Twitter is full of trivial little messages that sometimes merge into a coherent conversation before disintegrating again. I like it. Instant messaging is too intrusive. Email takes too much effort. Twittering feels just right for the little things: where I am, what I’m doing, what I’m thinking.

The characters in your username counted towards your 140 characters. That’s why Tantek changed his handle to be simply “t”. I tried it for a day. I think I changed my handle to “jk”. But it was too confusing so I changed it back.

We weren’t always sure how to write our updates either—your username would appear at the start of the message, so lots of us wrote our updates in the third person present (Brian still does). I’m partial to using the present continuous. That was how I wrote my reaction to Chris’s weird idea for tagging updates.

I think about that whenever I see a hashtag on a billboard or a poster or a TV screen …which is pretty much every day.

At some point, Twitter updated their onboarding process to include suggestions of people to follow, subdivided into different categories. I ended up in the list of designers to follow. Anil Dash wrote about the results of being listed and it reflects my experience too. I got a lot of followers—it’s up to around 160,000 now—but I’m pretty sure most of them are bots.

There have been a lot of changes to Twitter over the years. In the early days, those changes were driven by how people used the service. That’s where the @-reply convention (and hashtags) came from.

Then something changed. The most obvious sign of change was the way that Twitter started treating third-party developers. Where they previously used to encourage and even promote third-party apps, the company began to crack down on anything that didn’t originate from Twitter itself. That change reflected the results of an internal struggle between the people at Twitter who wanted it to become an open protocol (like email), and those who wanted it to become a media company (like Yahoo). The media camp won.

Of course Twitter couldn’t possibly stay the same given its incredible growth (and I really mean incredible—when it started to appear in the mainstream, in films and on TV, it felt so weird: this funny little service that nerds were using was getting popular with everyone). Change isn’t necessarily bad, it’s just different. Your favourite band changed when they got bigger. South by Southwest changed when it got bigger—it’s not worse now, it’s just very different.

Christopher Alexander made a great diagram, a spectrum of privacy: street to sidewalk to porch to living room to bedroom. I think for many of us Twitter started as the porch—our space, our friends, with the occasional neighborhood passer-by. As the service grew and we gained followers, we slid across the spectrum of privacy into the street.

It’s hard to put into words how good this feels. There’s a psychological comfort blanket that comes with owning your own data. I see my friends getting frustrated and angry as they put up with an increasingly alienating experience on Twitter, and I wish I could explain how much better it feels to treat Twitter as nothing more than a syndication service.

When Twitter rolls out changes these days, they certainly don’t feel like they’re driven by user behaviour. Quite the opposite. I’m currently in the bucket of users being treated to new @-reply behaviour. Tressie McMillan Cottom has written about just how terrible the new changes are. You don’t get to see any usernames when you’re writing a reply, so you don’t know exactly how many people are going to be included. And if you mention a URL, the username associated with that website may get added to the tweet. The end result is that you write something, you publish it, and then you think “that’s not what I wrote.” It feels wrong. It robs you of agency. Twitter have made lots of changes over the years, but this feels like the first time that they’re going to actively edit what you write, without your permission.

Maybe this is the final straw. Maybe this is the change that will result in long-time Twitter users abandoning the service. Maybe.

Me? Well, Twitter could disappear tomorrow and I wouldn’t mind that much. I’d miss seeing updates from friends who don’t have their own websites, but I’d carry on posting my short notes here on adactio.com. When I started posting to Twitter ten years ago, I was speaking (or microblogging) into the void. I’m still doing that ten years on, but under my terms. It feels good.

I’m not sure if my Twitter account will still exist ten years from now. But I’m pretty certain that my website will still be around.

My site has been behaving strangely recently. It was nothing that I could put my finger on—it just seemed to be acting oddly. When I checked to see if everything was okay, I was told that everything was fine, but still, I sensed something that was amiss.

I’ve just realised what it was. Last week on the 30th of September, I didn’t do or say anything special. That was the problem. I had forgotten my blog’s anniversary.

I’m so sorry, adactio.com! Honestly, I had been thinking about it for all of September but then on the day, one thing led to another, I was busy, and it just completely slipped my mind.

It has been a very rewarding, often cathartic experience so far. I know that blogging has become somewhat passé in this age of Twitter and Facebook but I plan to keep on keeping on right here in my own little corner of the web.

I should plan something special for September 30th, 2021 …just to make sure I don’t forget.

There was a design session looking at alternatives to simply presenting everything in a stream. Some great ideas came out of that. And there was a session all about bookmarking and linking. That one really got my brain whirring with ideas for the second day—the making/coding day.

I’ve learned from previous Indie Web Camps that a good strategy for the second day is to have two tasks to tackle: one that’s really easy (so you’ve at least got that to demo at the end), and one that’s more ambitious. This time, I put together a list of potential goals, and then ordered them by difficulty. By the end of the day, I managed to get a few of them done.

I didn’t get around to adding pagination. That’s something I should definitely add, because some of those pages get veeeeery long. But I did spend some time adding sparklines. They can be quite revealing, especially on topics that were hot ten years ago, but have faded over time, or topics that have become more and more popular with each year.

I’m recovering from an illness that laid me low a few weeks back. I had a nasty bout of man-flu which then led to a chest infection for added coughing action. I’m much better now, but alas, this illness meant I had to cancel my trip to Chicago for An Event Apart. I felt very bad about that. Not only was I reneging on a commitment, but I also missed out on an opportunity to revisit a beautiful city. But it was for the best. If I had gone, I would have spent nine hours in an airborne metal tube breathing recycled air, and then stayed in a hotel room with that special kind of air conditioning that hotels have that always seems to give me the sniffles.

Anyway, no point regretting a trip that didn’t happen—time to look forward to my next trip. I’m about to embark on a little mini tour of some lovely European cities:

Tomorrow I travel to Stockholm for Nordic.js. I’ve never been to Stockholm. In fact I’ve only set foot in Sweden on a day trip to Malmö to hang out with Emil. I’m looking forward to exploring all that Stockholm has to offer.

On Saturday I’ll go straight from Stockholm to Berlin for the View Source event organised by Mozilla. Looks like I’ll be staying in the east, which isn’t a part of the city I’m familiar with. Should be fun.

Alas, I’ll have to miss out on the final day of View Source, but with good reason. I’ll be heading from Berlin to Bologna for the excellent From The Front conference. Ah, I remember being at the very first one five years ago! I’ve made it back every second year since—I don’t need much of an excuse to go to Bologna, one of my favourite places …mostly because of the food.

The only downside to leaving town for this whirlwind tour is that there won’t be a Brighton Homebrew Website Club tomorrow. I feel bad about that—I had to cancel the one two weeks ago because I was too sick for it.

But on the plus side, when I get back, it won’t be long until Indie Web Camp Brighton on Saturday, September 24th and Sunday, September 25th. If you haven’t been to an Indie Web Camp before, you should really come along—it’s for anyone who has their own website, or wants to have their own website. If you have been to an Indie Web Camp before, you don’t need me to convince you to come along; you already know how good it is.

The importance of owning your data is getting more awareness. To grow it and help people get started, we’re meeting for a bar-camp like collaboration in Brighton for two days of brainstorming, working, teaching, and helping.

If you haven’t been to an Indie Web Camp before, it’s a very straightforward proposition. The idea is that you should have your own website. That’s it. Everything else is predicated on that. So while there’ll be plenty of discussions, demos, and designs, they’re all in service to that fundamental premise.

The first day of an Indie Web Camp is like a BarCamp. We make a schedule grid at the start of the day and people organise topics by room and time slot. It sounds chaotic. It is chaotic. But it works surprisingly well. The discussions can be about technologies, or interfaces, or ideas, or just about anything really.

The second day is for making. After the discussions from the previous day, most people will have a clear idea at this point for something they might want to do. It might involve adding some new technology to their website, or making some design changes, or helping build a tool. For people starting from scratch, this is the perfect time for them to build and launch a basic website.

At the end of the second day, everyone demos what they’ve done. I’m always amazed by how much people can accomplish in just one weekend. There’s something about having other people around to help you that makes it super productive.

You might be thinking “but I’m not a coder!” Don’t worry—there’ll be plenty of coders there so you can get their help on whatever you might decide to do. If you’re a designer, your skills will be in high demand by those coders. It’s that mish-mash of people that makes it such a fun gathering.

I’ve got a fairly simple posting interface for my notes. A small textarea, an optional file upload, some checkboxes for syndicating to Twitter and Flickr, and a submit button.

It works fine although sometimes the experience of uploading a file isn’t great, especially if I’m on a slow connection out and about. I’ve been meaning to add some kind of Ajax-y progress type thingy for the file upload, but never quite got around to it. To be honest, I thought it would be a pain.

But then, in his excellent State Of The Gap hit parade of web technologies, Remy included a simple file upload demo. Turns out that all the goodies that have been added to XMLHttpRequest have made this kind of thing pretty easy (and I’m guessing it’ll be easier still once we have fetch).
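
A sketch of that kind of progress-reporting upload, using nothing fancier than XMLHttpRequest and FormData:

```javascript
// Upload a form (including its file input) via Ajax, reporting progress.
const form = document.querySelector('form');
form.addEventListener('submit', event => {
  event.preventDefault();
  const xhr = new XMLHttpRequest();
  xhr.open('POST', form.action);
  xhr.upload.addEventListener('progress', e => {
    if (e.lengthComputable) {
      const percent = Math.round((e.loaded / e.total) * 100);
      console.log(`${percent}% uploaded`); // or update a <progress> element
    }
  });
  xhr.send(new FormData(form)); // FormData picks up file inputs automatically
});
```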