Special Announcement Edition: The Web Turns 30 and Time for a Big Change

TL;DR I’m changing things up a bit. I’ll still be sharing history, but that history will look a bit different, and come to you a little less frequently than every week. First one goes out next week. Also, a redesign!

Haven’t signed up yet? Now’s your chance to subscribe.

Thirty years ago today, Tim Berners-Lee handed his employers at CERN a document entitled “Information Management: A Proposal,” an outline for a networked, hypertext information system for sharing and exchanging documents. It would later be called the World Wide Web. It’s a staggering milestone given the amount that has changed in that time. For some of us, the web has been around for our entire professional lives. Yet in many ways, we’ve barely even begun.

This proposal is also where my timeline begins. I sent my first newsletter out a little over two years ago, a smaller entry about the endless renaming of the Mozilla browser. Since then, I’ve expanded my research quite a bit, added hundreds of entries to an ongoing timeline, sharing what I could along the way in small, bite-sized posts that took a slice of that history and told its story. I even published an ebook. The most incredible thing to me is to be part of the web at a time of great reflection by its creators, builders, and users. As the web enters its thirtieth year, our perspective is shifting as we see the impact it can have on the world, in positive and negative ways. I am hopeful that a new web will be born from this perspective, one that considers personal privacy, openness and the other founding ideals of the World Wide Web.

So first of all, thank you. Thank you for reading along. Thank you to those who just signed up last week, and especially to those who signed up years ago and have been along for the ride ever since.

I have no plans on stopping.

It does, however, feel like a good time for a change. Thus far, the stories I’ve plucked from the web’s past have been more or less at random. This was a product of my research process, which has found me digging in wherever I could and sharing whatever it was that I found. Two years of research (and truth be told, this process has been going on even longer than that) has led me to a clearer picture and a deeper understanding of the web’s trajectory.

It’s also been a lot. Writing something new each week can, at times, be a daunting task. Staring down that deadline means I sometimes publish a post before it feels fully finished. The truth is, the web absolutely and utterly fascinates me. Endlessly. I want my writing to reflect that.

Here’s what I’m going to do. I’m going back to the beginning. Starting next week, I’ll be sharing a different kind of post. One that is longer, and spans a greater period of time than a single isolated moment. I will be sharing individual chapters from the web’s history, starting with its origin and moving through all of its multi-faceted transformations. I hope to ground this work in questions about where the web came from, what its foundational ideals were, who created it, and how we got to where we are today.

I’ve written a few drafts in this new format, and I think they’re the best work I’ve done in a while. I’m excited to share them. But they are longer pieces, and they require more research and more time to write.

So I won’t be publishing each week. In fact, I won’t have a regular publishing schedule. I expect that I’ll have a new one of these chapters done at least every couple of months, if not more frequently. But I’ll be publishing them as they’re ready.

Now, this isn’t the only kind of post I’ll be writing. I’ll also be filling in the gaps in between with the same stories you all know (and I hope love). Smaller entries with a tidbit or milestone plucked from the timeline, which I’ll still be adding to. There are plenty more ideas for that and I want to continue to share those. But again, these won’t necessarily go out on a weekly or set schedule. Bottom line, I’m experimenting a bit.

My other big news: I’ve redesigned the site! I’ve gotten positive feedback on the last iteration, but the truth is, it was always meant to be a temporary design. The web’s grown up a bit, and there are some really cool new CSS features I’ve been itching to try out. So I put them to work on the site, trying to maintain its simplicity while adding a bit of a unique style. You can even search the archives now, which is long overdue. If you have a minute, I’d love for you to pay it a visit. It’s a work in progress, so comments and feedback are welcome.

Over the next few months, I hope to share some more experiments I’ve been trying out. A few weeks ago, I published my first ever guest post. It’s incredible to see a different perspective on history. It’s something I’d like to continue (incidentally, if you have a story to share, go ahead and respond to this post!). I have some more ideas in the works, so stay tuned for those.

For now, look to next week for the very first chapter in what I’m calling A Complete History. It’ll take a look at what makes the web the web.

Again, thank you to each and every one of you who’s followed along. I’m excited to kick off this new phase.

What Does AJAX Even Stand For?

The term AJAX may not have been coined until 2005, but its origin stretches all the way back to the early 2000s, when browsers provided developers with the glue between clients and servers.

In the summer of 2001, Paul Buchheit sent a website he was working on to his friends and colleagues. The site was an application where users could search through thousands of emails at a time and get back relevant results.

Buchheit had finished the software in a single day, so it was a bit rough around the edges. But there was really only a single request from his friends. It would be great if the software could search through their emails. See, in its first version, only Buchheit’s email was accessible to every user that visited the site. Buchheit made a note of that and got back to work.

Google released Gmail to the public a few years later. It had more than a few things going for it. Gmail came with 1GB of storage (most competitors offered 4MB) and effective search. At the beginning the only way to get access was through an invite, so it had an exclusive feel. And most impressively, Gmail’s interface was snappy and responsive. The search bar would autocomplete as you typed. Conversations were organized into expandable, easy to read threads. And every time you clicked on an email, the message would load almost instantly. There was no doubt about it. Gmail was on the web, but it felt like a desktop application.

When Google acquired Where2 Technologies in 2004, the Rasmussen brothers were on their last legs. A last minute investment deal had gone south and the two were in danger of losing their company. Google’s acquisition came just in time to save them. For the last several years, Lars and Jens Rasmussen had been working on a map application with a fresh interface. The app was based on the concept of tiles. Each tile represented a small part of the map, and stitching them together allowed users to navigate the map just by scrubbing their mouse around.

The app was built for the desktop, but Google co-founder Larry Page was a big fan of the web. He pushed the newly absorbed team to recreate their application as a website. Unaware of what was going on with Gmail, the team went about recreating their technology in the browser. Not too long after that, in February of 2005, Google Maps went into beta.

Gmail and Google Maps shared more than just a parent company. Their interfaces were both dynamic and performant and on the very cutting edge. They felt like the future of the web. And they both made judicious use of JavaScript. This allowed them to connect to a server asynchronously, and pull down new data without loading a new page. They had Microsoft to thank for that particular feature.

At the time, there were some pretty interesting things going on at Microsoft. Starting with Internet Explorer 3, Microsoft had added their own implementation of the Java Virtual Machine to the Windows operating system, and IE specifically, for the purpose of running Java applets. But almost as soon as they did, Sun (the creator of Java) sued Microsoft for failing to comply fully with the Java standard. Microsoft, already embroiled in a much larger antitrust suit, decided that they would simply remove Java from their operating system, and that would be that.

There was just one problem. In order to get some of the more advanced features of their Outlook email web app to work, they had relied on some of the Java components of Internet Explorer. So to plug this gap, right before the release of Internet Explorer 5 in 1999, Microsoft added a new function to the browser called XMLHttpRequest, initially exposed to JavaScript as an ActiveX object. This allowed the browser to make an HTTP request to the server and get back some data in XML behind the scenes.

What that means is that data could be refreshed on the page without a full reload. It served its purpose pretty well. But all that work was just to get the Outlook website working the way it was supposed to.
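
To make this concrete, here is a minimal sketch of the pattern XMLHttpRequest made possible: ask the server for data in the background and swap it into the page when it arrives. The endpoint, element id, and XML structure are hypothetical, purely for illustration.

```typescript
// Hypothetical example: fetch the latest message summary without a page reload.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/inbox/latest"); // asynchronous by default
xhr.onload = () => {
  const doc = xhr.responseXML; // the server answers with an XML document
  if (xhr.status === 200 && doc) {
    const subject = doc.querySelector("subject")?.textContent ?? "";
    // Update the page in place, no full reload required.
    document.getElementById("message-preview")!.textContent = subject;
  }
};
xhr.send();
```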

But that didn’t stop other browsers from adopting this new functionality. Mozilla soon followed suit with an implementation of its own, which later shipped in Firefox. The function was used here and there over the next couple of years. The most notable of the bunch was Oddpost, an email web application. Oddpost made a lot of design decisions that have stuck around in email applications, such as the three-pane window. As far back as 2002, they were using the XMLHttpRequest function pretty extensively in their application. However, the site required a paid subscription and therefore didn’t receive that much attention until it was bought by Yahoo in 2005.

It was this same function that brought Gmail and Google Maps to life. For both teams, HTML wasn’t going to cut it on its own. JavaScript would have to pull it all together.

Buchheit at Gmail, and the Rasmussen brothers at Google Maps, were told that relying so heavily on JavaScript was a big mistake. There was certainly some truth to that. At the time, implementation across browsers was spotty and inconsistent. And unlike the forgiving nature of HTML, if you screwed up in JavaScript you could crash the browser.

Even so, they pulled it off. Users took notice. The ability to fly through thousands of emails or scroll through a map tile by tile, without a lag, was… well, it was just plain incomparable. There have been a few moments when the way the world thinks about the web has made a big shift. This was one of them. Both services took off, and in no time, the techniques behind them did too.

Subsequently, Jesse James Garrett of Adaptive Path finally gave this use of JavaScript a name. He called it Asynchronous JavaScript and XML (thankfully shortened to Ajax). Ajax wasn’t a single method or technology, but a group of principles based on the work being done at Google that described how to handle JavaScript in more demanding web applications. The short version is actually fairly simple. Use XMLHttpRequest to make requests to the server, take the XML data returned, and lay it out on the page using JavaScript and semantic HTML.

Ajax took off after that. It underpins major parts of modern web development, and has spawned a number of frameworks and methodologies. The technology itself has been swapped out over time. XML was replaced by JSON, and XMLHttpRequest by fetch. That doesn’t matter though. Ajax showed a lot of developers that the web wasn’t just about documents. The web could be used to build applications. That might seem like a small notion, but trust me when I say, it was momentous.
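
For comparison, here is roughly the same round trip written with the modern replacements mentioned above, fetch and JSON. The endpoint and field names are again hypothetical.

```typescript
// Hypothetical example: the same background request using fetch and JSON.
async function loadLatestMessage(): Promise<void> {
  const response = await fetch("/inbox/latest.json");
  const message = (await response.json()) as { subject: string };
  // Same idea as before: swap new data into the page without a reload.
  document.getElementById("message-preview")!.textContent = message.subject;
}

loadLatestMessage().catch(console.error);
```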

The World Wide Web Recap, February 2019

Each month, I send out a list of links from my research or around the web. Here are the very best links I found in February.

Recreating the First Web Browser at CERN

After rebuilding the first ever website back in 2013, the CERN hack team came back together this year for an even more ambitious project: recreating the WorldWideWeb browser, the first web browser ever built by Tim Berners-Lee to demonstrate the networked hypertext capabilities of the web, a browser I’ve written about before. Over the course of a week in Geneva, they managed to hack together a new version of the browser that, incidentally, runs entirely inside of the browser! It’s a massive achievement and a fun experiment, made doubly so by revealing the original intent of the web as a two-way street. The WorldWideWeb browser was a read and write browser, which allowed users to interact with and shape the web as they viewed it, a vision that has been reversed quite a bit in the last 30 years. Another great artifact of the week? Jeremy Keith’s fascinating collider timeline of the web’s pre-history.

Avery Pennarun moves the privacy conversation beyond the sheer volume of data being harvested from each and every one of us to the next logical question: what do we do with all of that data? The answer, it seems, is not very much. As one engineer pointed out, “Everyone loves collecting data, but nobody loves analyzing it later.” It turns out (as some of us likely already know) that all the data being leaked out more and more each day may not even serve a useful purpose to those who collect it.

The title on this one buries the lead a bit. Developer Bridget Stewart takes a trip through her own personal history of the web to not just argue for simpler approaches to web development, but to restate the case for progressive enhancement in a modern context. Stewart reinforces the idea that the web is a forgiving medium and that code bloat of websites filled to the brim with Javascript is just another case of history repeating itself. It may just be time for a bit of a course correction (she also comes to the defense of the Cascade in CSS, a position I personally think is undervalued and dismissed far too frequently, but hey, that’s another story).

This one from Paul Robert Lloyd is from late last year, but it’s already made its way onto my favorites list. Lloyd contrasts the commoditization and homogeneity of design and website publishing with many web creators’ innate desire to build something unique and fundamentally of the web. Ultimately, this push and pull will always drive us forward, but it is the way in which we interact with this dichotomy that can help us define the web’s future.

Tania bills her site as the “missing instruction manuals of the web,” which is a concept I quite like. She spent her early career as a chef, but transitioned to a full-time web developer about 5 years ago and since then has been focused on writing all about web development. Her tutorials are clear and easy to follow, and the topics range from Git to React. As an added bonus, the site has absolutely no advertising or sponsored posts, though you can support her directly if you find the tutorials useful.

Came across this bonkers quote from 1995 while doing a bit of research last week. It misses the mark so completely that it actually almost circles around to a good point. Anyway, it’s a good reminder that none of us are very good at predicting the future:

Do our computer pundits lack all common sense? The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works.

How to Use the Web To Show the Truth

Ushahidi is a platform that uses the democratizing power of the web to open access to citizen journalists on the ground and shine a light where the truth is hidden.

When Ory Okolloh returned to her hometown in Kenya to vote in the 2007 presidential election, she arrived with hope for the future of her country. She was an activist and a blogger, and had run the site Kenyan Pundit since 2005. She had also helped build tools to increase government accountability, and worked with and for NGOs in Africa. She had helped to make change in her country, and many of her fellow citizens, especially from the younger generation, shared her pride.

Unfortunately, what took place after the election was a countrywide crisis. At the end of December 2007, presidential incumbent Mwai Kibaki was declared the winner of the election, but the election itself appeared to be rigged. Protests broke out across the nation, some of which turned to horrific violence and rioting.

Okolloh describes this time as a surreal one. While violent protests shook the country, media outlets largely ignored the crisis. Okolloh recalls cartoons playing on TV at the same time as “you look outside and you’re seeing smoke all over Kibera.” So Okolloh logged on to her blog and started reporting the only way she knew how, with the truth. Each day, she would post a dozen times (or more) to her blog, sourcing locations and reports from her followers and confirming them with her contacts in NGOs. One of her biggest worries was that the international media was underreporting fatalities and displacements that were occurring in Kenya because they had no access to truthful accounts.

The Kenyan Pundit blog when Ushahidi was first announced

By the 2nd of January, Okolloh returned to South Africa and began thinking through how to solve this issue and give members of the press more accurate information. On her blog, she posted a request for any “techies” interested in creating a Google Maps mashup that would take eyewitness reports and lay them out on a map of Kenya. That’s when Erik Hersman reached out.

Hersman was the son of missionaries, and grew up in Kenya and the Sudan. He eventually set up shop in Orlando, Florida but kept his connection to Africa alive by blogging on his website WhiteAfrican. He had followed Okolloh for years, and when she posted about a Google Maps mashup he saw a powerful idea. Luckily, Hersman was also plugged into the tech scene. And he knew that to really get things moving they would need some additional help.

David Kobia was born in Kenya, but moved to Birmingham, Alabama to attend the University of Alabama. However, he soon discovered web development and dropped out of school to pursue the web as a career full time. Kobia had created the online forum Mashada back in 1998 so Kenyans could come together and discuss politics. But in the wake of the election, Mashada had become a hotbed of indignation and ire. Still exasperated from the election, and now fed up with what his forum had become, Kobia shut down Mashada and got into his car to join some friends for a weekend away. Driving up the interstate through Georgia, he got a call from Hersman. Hersman walked Kobia through the idea. He understood its potential right away, turned his car around, and sped home.

The final piece was to gather people that could help verify and confirm reports on the ground, and at the same time cull through the data and make sure there was a truthful narrative there. That’s where bloggers like Juliana Rotich (who would go on to become the platform’s Executive Director) came in.

In just a few days, this team created Ushahidi.

Ushahidi means “witness” in Swahili. And that’s who the website this team put together was meant to serve. The first version of the site was very simple. A form allowed users to report an incident, be it a violent protest or displaced community, noting the time and location and any details they wanted to include. These reports would then be overlaid on top of a digital map, creating an extremely accurate, real-time, crowd-sourced report of violence and displacement in Kenya. To make this happen, Kobia put together a lightweight application on top of Kohana, itself a spin-off of the popular PHP web framework CodeIgniter.

Every day, more and more reports came in. And Ushahidi grew with them. One of the crucial and early additions to the site was the ability to send reports in via email and SMS. For a lot of Kenyans, computers were not readily available. But most had mobile phones, and being able to send an SMS message made the site much more accessible. And as the conflict began to turn, the team added a “Peace Efforts” category to highlight some of this positive change.

In order to keep the barrier to entry as low as possible, no registration was required to submit a new report to Ushahidi. But absolute truth was essential to the project. So as the application progressed, it transformed into a central hub of records from various sources, which Okolloh, Rotich, and Ushahidi advocates could sort through and verify. And in a very short time, what had started as a simple technology mashup had evolved into a powerful crowd-sourcing platform.

The site, in many ways, represented the truth laid bare. In a conflict that was difficult to account for and difficult to source, Ushahidi provided the media and the world with access to verified information about what was actually happening on the ground. Just the simple ability for people to identify their loved ones was extremely powerful. In other cases, displaced communities were able to find the help they needed thanks to the platform. But perhaps most importantly, a complete picture of the conflict would help to ensure that Kenya would not soon forget the atrocities that occurred throughout the nation.

The idea resonated. The group soon realized that a site like Ushahidi would be useful outside of just a singular conflict. Originally, they thought they would just open source the site’s code, which they eventually did (it’s up on GitHub now). But visitors poured into the site, and the media reached out to the team for interviews. There was more to be done.

In the coming months, the full team all quit their day jobs to work on Ushahidi full time. With the help of a few grants and an ambitious group of people, the platform was made more stable, simpler to use, and easier to implement. Since 2007, it has been used in situations when the truth is vast and hard to account for, in places like the Congo and South Africa. It was utilized by Al Jazeera in Uganda, and in the US by the Obama campaign during the 2012 presidential election, and in 2016 to chart citizen-reported instances of voter suppression. Since then, many of the project’s founders have continued work at the organization, though Okolloh continued her career elsewhere, first as Google’s policy manager for sub-Saharan Africa, and more recently, as Director of Investments at the Omidyar Network.

In 2017, an executive at Ushahidi was accused of sexual harassment by a former employee. The details were nothing short of appalling, made worse by the organization’s slow and ineffectual initial reaction. After an investigation, the executive was eventually fired, and Ushahidi has sworn to better uphold the principles that stand at the core of its very creation. But those that swear to uphold veracity and transparency need to be willing to shine that light of truth back on themselves. So I’ll end with the words of Okolloh, in a blog post she wrote in the days after the allegations surfaced called No Sacred Cows.

If we want to protect those working in these areas, or anywhere else, we need to protect the values of respect, equality and openness, not specific organisations or people. The start-up eco-system in Kenya is no longer nascent, it can and must handle the hard work and tough conversations that will happen in the coming weeks and months.

Alt.zines and Memories of a Media Transition

By Emerson Dameron

In the late ‘90s, tiny magazines were having a moment. The popularity of underground punk and indie rock, together with the wide circulation of the review zine Factsheet Five, gave rise to zines, a subculture of shameless self-expression that was thriving around the same time millions of living rooms got their first taste of dial-up AOL.

Perhaps the easiest way to explain now what a zine was then is to work backwards. Zines were like blogs on paper. They usually started as the work of a lone weirdo, dedicated to an unusual interest or mode of communication, putting out their work for a devoted cult following and vast commercial indifference.

Compared to reading professionally edited prose, reading a zine felt closer to corresponding with a friend. Few of these projects ever had anything close to fame or wealth. Most zine people had day jobs outside their creative pursuits. They included domain experts along with punks, drifters, dedicated artists, street people, and full-moon-howling maniacs.

In the confines of their zines, writers could vent without the threat of real-time pushback – readers were usually sympathetic, as we had to go out of our way to find these things in the first place. When writers accustomed to self-indulgence gathered in online conversations, they tended to be intense.

Through the early 2000s, zinesters got together on alt.zines, a Usenet group for small-potato pulp publishers. It was a fast-paced cascading discussion about print, occurring through an electronic medium, at a time when the rise of the web, particularly Craigslist, was dramatically rearranging the incentive structures of paper publishing. This mess of overlapping ironies only added to the hothouse atmosphere of the group.

Alt.zines was defined by wit, venom, and strong opinions. It played host to sniping, drawn-out arguments, and callous indifference to feelings.

As someone who grew up in the sticks with no home access to cable TV, much less the internet, receiving new zines through the mail was the next best thing to hearing from one of the pen pals I made at summer camp. When I started college and got my first reliable internet access, one of the first places I went was alt.zines. It was there that I ran into the writers and publishers I admired most, the strange and sometimes volatile people behind Crank, Infiltration, Fucktooth, Pills a Go Go, Pathetic Life, and Crimewave USA.

It was through my engagement with alt.zines that I learned a lot about the social dynamics of the internet, particularly when strong personalities come online. I saw flamewars, pileups, mean-spirited pranks, and throngs of false personae. I saw people I liked rip into each other, and sometimes felt them rip into me.

My first post was a clueless advertisement for the zine I was publishing at the time, an artsy, unreadable cut-and-paste mess called Pyramid Scheme. One of the regulars responded with a dismissive one-line crack about the name. For some reason, I stuck around, eventually dislodging my head from my ass, making better zines, and getting into the mix.

I stumbled into the group around the same time as Jeff Somers, who published The Inner Swine before moving on to a successful fiction-writing career. His sharp comic mind yielded some of the most insightful meta-level reflections on the comings, goings, and interactions of alt.zines.

A later copy of The Inner Swine

“It seemed like everyone posting there knew each other really well and had all these long-standing traditions and arguments,” recalls Somers. “It was exactly like that first day you showed up to an activity at school. Everyone turns to glance at you and then goes back to business. And so I tried desperately to be clever and super, super edgy in my first posts, which were all about how my zine was soon going to be a bible of sorts in a new world religion and everyone ought to pay attention to me. And no one did. The disinterest that alt.zines offered me was epic. But healthy. I learned to read posts and respond instead of just grandstanding. And slowly I got the rhythm of it, and got some comments, and even occasionally got sucked into the endless, exhausting debates and fights that clogged the newsgroup, though I tried hard not to. The main take-away for me was that all these idiots who showed up and tried to tell zinesters how they were supposed to be acting – to get organized, to fight the power, whatever – were met with scathing disinterest and hostility. It was a messy place for messy people and the last thing we wanted was to be told we’d joined a movement.”

We saw bare-knuckle librarian e-fights about ISSNs. We saw attempts at centralization (or at least explanation) and lots of internecine drama. Much of it involved the personal lives of high-profile zinesters, but some of it turned out to be more significant, especially that involving the struggles and chicanery of larger-scale zine distributors (which by then were showing signs of steep decline).

Most of the e-zines that were circulating in the late ’90s were markedly inferior in quality to the paper zines of the time. E-zine publishers sometimes posted their work on alt.zines, where it was routinely ignored and occasionally shredded by the regulars. Electronic publishing wasn’t taken seriously as a threat to this stout little medium or the surrounding community.

Digital e-zines often used simple, monochrome formats because of the simplicity of digital screens, image from Tom Warner

By the end of the ‘90s, a few notable writing talents were making their names solely through web publishing. Blogging was very much a thing. It was acknowledged reluctantly and circuitously by alt.zines in 2002, in threads such as “The Blog That Ate Cleveland.” The posters seemed to agree that the internet would never rid the world of paper publishing, before descending into a discussion of what happened when they did Google searches with each other’s names along with the word “asshole.”

Now, alt.zines is an unloved graveyard smothered in garbage. And while the internet did indeed decimate the alternative weekly publishing model and other large chunks of the print business, zines themselves are thriving. Younger zinesters are more creative than ever – now, zines are artisanal objects, and design, aesthetics, and the overall experience of the object matter just as much as the text. Zine makers still gather at festivals and discuss their hobby online, on reddit, Facebook, and elsewhere.

“Ultimately, what’s really weird is that the Internet didn’t kill paper publishing, or zine publishing,” says Jeff Somers. “It killed alt.zines. By the time of my last post (2010) it had been overrun with spam and bots as everyone fled for web forums and communities. Print? We still got that.”

Alt.zines is dead, but it was an interesting place to watch a transition in progress.

Warring Editorial

By 2000, Salon had made quite the name for itself with their quick, pithy headlines and stories that posted at all hours of the day, around the clock. Their coverage of President Clinton’s impeachment trial was particularly exhaustive, with up to the minute updates multiple times a day. It was the closest you could get to watching the whole thing live. Then there was their singular effort to bring to light the alleged hypocrisy of Henry Hyde, chairman of the House Judiciary Committee, who was involved in a romantic affair of his own during the impeachment proceedings.

There were a lot of other news outlets that could’ve run with the Hyde story, but Salon was the only one to give it a chance. Editorially, it was the kind of balance the magazine was known for striking – at once salacious and investigative – earning them a certain notoriety in the press while simultaneously making them an oft-cited source by news outlets around the world.

The latter of these two distinctions was why they felt it necessary to cover the nascent presidential campaign of ultra-conservative Gary Bauer in early 2000. There would be plenty of others looking to them for reporting in the run-up to the election. The former, their modern online chic and bawdy reputation, is why the journalist they chose was Dan Savage: media pundit, activist, at times bombastic, and a self-proclaimed, proud leftist.

The result was a story titled Stalking Gary Bauer (pictured above). In it, Savage eviscerated the agenda and ideals of the homophobic candidate and his potentially malignant policies. To gain access, Savage signed up to Bauer’s campaign as a clandestine volunteer. The day before he started, he had contracted the flu. In an extravagant attempt to sabotage Bauer and his campaign, Savage “started licking doorknobs. The front door, office doors, even a bathroom door. When that was done, I started in on the staplers, phones and computer keyboards.”

It was a flashy bit of gonzo journalism. The kind of writing that had come to be expected from both Salon generally, and Savage specifically, as purveyors of a journalistic style that stood outside the mainstream – in tone if not in all out approach. It was a style familiar, emulated, and refined by a small cohort of online-only, early web adopter publications, not the least of which was Slate, Salon’s occasional rival.

Salon and Slate had gotten their start around the same time, and under unusually similar circumstances. They were also competing for the same audience, and a bit of bravado was to be expected when each publication tried to prove that its slice of the pie was the biggest. Over the years a rivalry had emerged, and it was not at all uncommon to see shots fired across the bow from one publication to another in the form of jabs buried in online columns, sometimes even authored by David Talbot – editor of Salon – or Michael Kinsley – editor of Slate.

In 1999, Salon went public and the windfall from their IPO left them with cash to burn. In an incredibly short period of time, they expanded their staff, moved to new offices, and dramatically improved their readership. Some critics, however, saw the IPO as nothing but a black magic cash grab, warning that it would only end in collapse (they were right, of course. Salon, along with most of the web, would lose a ton of cash flow when the dot-com bubble burst). And wouldn’t you know it, one of the most vocal of these critics was Michael Kinsley.

Kinsley took to his weekly column in Slate to criticize Talbot, and Salon, for the supposed frivolity of a publicly traded news organization cashing in on its internet cred for a massively ballooned but short-lived valuation. By the end of the year, Kinsley made a habit of pointing out the discrepancies between the financial milestones Talbot would often tout in press interviews and the much-lower numbers public reports seemed to indicate.

Talbot’s response to these charges was routinely curt. Salon was a better publication, IPO or no. That’s why, at the end of 1999 and beginning of 2000 at least, they had more readers, better coverage and more mass appeal.

The feud escalated for months before coming to a head in February of 2000, less than a month after Savage posted his flu-spreading expose. A particularly brutal exchange was chronicled in the New York Post under the garish headline (a style the Post has all but mastered), ONLINE DISSING MATCH; SALON, SLATE HONCHOS’ SNIPING GETS PERSONAL. In it, Talbot, along with Salon’s Vice President, began with their usual logic: Salon was more popular because it was better. Period.

This set Kinsley off. In his response, Kinsley referenced Savage’s latest piece:

We have buzz, too — we’re not all that straightlaced. But we don’t have to go around licking doorknobs. . . .

This certainly didn’t help end the feud. Talbot fired back, labeling Kinsley as Bill Gates’ “house pet.” The rivalry was left hanging in the air, and the two went back to running their publications.

That is, for at least a month.

In March of 2000, Kinsley published an article on Slate titled “Nyeh Nyeh Nyeh”. In it, Kinsley presented evidence which, in his estimation, proved that Slate had overtaken Salon in terms of monthly readership. He also went point by point through all of the attacks leveled against him, trying his best to deflect or disprove each one.

Eventually, the fight fizzled out. Both were consumed by larger ambitions and mostly went their separate ways. But it’s always interesting to observe when a traditional pattern moves to a non-traditional medium. Rivalry between news publications is, after all, nothing new. It’s just interesting that it moved with news media’s transition to the World Wide Web. And perhaps even stranger that it kicked off with a few licked doorknobs.

The Web Recap, January 2019

I like to cap off each month with a few links I’ve found from my research or around the web. Here are some cool links I found this month.

The Other Art History: The Forgotten Cyberfeminists of ’90s Net Art

Loney Abrams takes us back to the early ’90s, when the ubiquity and accessibility of the web enabled artists to publish and distribute their work to the masses with very little friction. Specifically, Abrams focuses on women of the Cyberfeminist movement who merged code and art to create some incredibly stunning and out-there works of interactivity and motion that were years ahead of their time.

April 7th will mark the 50th anniversary of the Request for Comments, more commonly known as the RFC, a way for tech projects to organize and collect knowledge. To commemorate the occasion, Darius Kazemi, who some of you likely know from Tiny Subversions, is documenting the first 365 RFCs, one each day this year. The project is as ambitious as it is fascinating for history wonks like you and me, so if you want to learn about what software projects were like 50 years ago, go ahead and dive in anywhere.

Part deep dive into Machine Learning, part tech thriller, data scientists Jasmine Greenway and Burke Holland have put their skills to the test to uncover one of social media’s most enduring mysteries: who is behind the @horse_js Twitter account. If you’ve ever wondered the answer (or even if you haven’t) you can follow Greenway and Holland as they walk you through the process of how a machine can tease out a digital identity. And… the answer made my jaw drop.

A reader recently reached out after my recent post about early blogs to point me to a list he’s been compiling for a little while. In the spirit of the many “awesome” lists that have been started on Github, Ben Read has been collecting his own list of some truly great developer blogs. If you’re looking for some new reading material, it’s worth a browse. And if you have something to add, you can submit a PR to add it to the list.

Anil Dash takes on the early days of social media on his new weekly podcast Function, from the team at Glitch. For their 11th episode, Dash brought together the stories of a few people that were on the web 20 years ago, experimenting with the very first social media tools, like LiveJournal, Pitas, and Open Diary. They have some fascinating insights into what the social web could have been, and the limitations that came with building for the early web.

If you haven’t heard it yet, I got a chance to go on Daily Dot’s 2 Girls 1 Podcast, to talk to Alli and Jen about the way I do my research and to talk about web communities, where they’ve been, and where they might take us next (and a few other cool stories as well).

Comparing the “Why” of Single Page App Frameworks

The web is filled with rundowns of this JavaScript framework versus that one. But we don’t often take the time to understand why they were built in the first place.

The phrase “single-page application” has come, over the years, to mean both a particular type of website and a web development paradigm. A website could be considered a single-page application (SPA) when it is built to resemble a desktop application more than a traditional static web document, taking advantage of structured JavaScript to connect to server-side services that add smoothness and dynamism to your average web experience.

That means sites where users can both read and edit the content, and content itself is updated without a page refreshing. Think Gmail or Twitter.

The phrase itself has roots that stretch back to 2002 when a few engineers from Tibco Software actually patented the technique behind an early attempt at a single page application. It was around that same time that Slashdotslash.com came on the scene, one of the first web applications online, a sandbox for experimenting with new web technologies all in a single HTML document without having to refresh the page.

But things really kicked off for SPAs in 2005, when Jesse James Garrett gave a name to AJAX, a technique that allowed developers to make dynamic requests to the server without loading a new page.

This coincided rather organically with the rise of client-side frameworks like jQuery, Dojo, and Prototype, raising the profile of JavaScript and stretching its limits.

Without these two trends, it’s unlikely that we would have seen the emergence of new single-page application frameworks, inspired by the likes of jQuery, but tweaked to take advantage of AJAX.

And if you search around enough you’ll see plenty of articles that dive deep into the technical considerations of one framework versus another, answering the question of how it does what it does best.

What you don’t see so much is the why.

So, I thought it might be fun to take a look at how developers described their own frameworks, at their conception or early on in their development, to try and glimpse the intentions behind them.

What will become abundantly clear is that each framework is a game of trade-offs. The ideology behind these frameworks plays heavily into the decisions made about how they would be structured, their programmatic API, and the footprint they leave behind.

Please keep in mind that this is in no way a comprehensive list, but I think it represents the trajectory of frameworks fairly well.

Backbone.js

Backbone.js aims to provide the common foundation that data-rich web applications with ambitious interfaces require — while very deliberately avoiding painting you into a corner by making any decisions that you’re better equipped to make yourself.

Backbone.js is probably where any conversation about SPA frameworks should start. It was developed by Jeremy Ashkenas in 2010 and aimed to give structure to what had become an unruly Javascript application landscape.

Ashkenas built Backbone on top of existing libraries, namely jQuery and Underscore. That’s where the idea for a “common foundation” comes from. The goal of Backbone was to unify and organize complex JavaScript code in a way that made it reusable across projects and simpler to understand. So Backbone provides just enough structure to move programmers away from unwieldy “spaghetti code” and deal consistently with data on the server, but still leaves the bulk of decision making in the hands of individual developers.
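
As a rough illustration of that structure (not code from the Backbone documentation; the endpoint and attribute names are invented), a Backbone model gave you one consistent way to load server data and react when it changed:

```typescript
// Assumes the backbone and jquery packages, plus their type definitions, are installed.
import * as Backbone from "backbone";

// A model describes one kind of server-backed data and where it lives.
const Email = Backbone.Model.extend({ urlRoot: "/emails" });

const email = new Email({ id: 1 });

// Other code listens for changes rather than poking at the DOM directly.
email.on("change", () => {
  console.log("Loaded:", email.get("subject"));
});

// Backbone handles the round trip; this issues GET /emails/1 behind the scenes.
email.fetch();
```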

AngularJS

I wanted to see if I could simplify this. But I didn’t want to simplify it for a developer, I wanted to simplify it for web designers. So people who don’t know how to program. And because people don’t know how to program, I had to be constrained to HTML.

AngularJS hit the scene right around the same time as Backbone, though it had been in development for some time before that. The intentions behind Angular were crystal clear. The target audience for the framework was designers, or at the very least, inexperienced developers. Most of the decisions about the framework’s structure followed from this assumption.

Templates, for example, could be created right in plain old HTML so that Angular users didn’t have to learn something new to get started. Angular also came with a few handy tools built right in, and encouraged an opinionated approach to development. All of this made the actual size and breadth of Angular much larger than frameworks that had come before it (like Backbone), but it also brought the learning curve way down.

Ember

Ember is a JavaScript framework that does all of the heavy lifting that you’d normally have to do by hand. There are tasks that are common to every web app; Ember does those things for you, so you can focus on building killer features and UI.

Ember actually began as a rewrite of the web framework SproutCore, which had been popular around the time of Backbone and Angular, and was used by Apple on many of their web projects. But SproutCore languished a bit after 2012, and many developers recognized that it was time for change. So developers Yehuda Katz and Tom Dale began working on SproutCore 2.0, which became Amber.js, and eventually Ember.

Katz and Dale had a lot of experience in the Ruby on Rails community. For those unfamiliar, Rails is a server-side framework that prefers “convention over configuration.” This basically means that many decisions about how an application should be developed are already made by the framework, giving individual developers a good head start.

This spirit informed the approach that Ember took. Ember’s creators reasoned that there was a whole lot of boilerplate code (getting data from a server, connecting routes to templates, breaking things out into partials, etc.) that developers needed to write over and over again for every project. So it did this work right up front, making a lot of assumptions about how the code works and abstracting it away.

As long as you stuck to Ember’s prescribed approach, a lot is done for you before you even write a single line of code. Katz even bragged that “if you’re a Backbone fan, I think you’ll love how little code you need to write with Amber.”

React

React is a library for building composable user interfaces. It encourages the creation of reusable UI components which present data that changes over time.

When React first launched, that’s how its developers described it. But they really summed it up like this:

React was created at Facebook to solve one very specific problem. When data is constantly changing and updating on a page (like when live updates stream in), things tend to get a bit slow. So they isolated the layer that was causing this problem, often referred to as the view layer, and went to work.

So for React, the why was simple. Speed.

Unsurprisingly, React is a framework in which all things follow from the data. When the data changes, things respond.

Quickly.

There are all sorts of algorithms (virtual dom anyone?) and even a new markup language named JSX that underpin the effort, but at the root, data is a first-class citizen. And as it turns out, speed not only gave React developers a clear goal to aim for but also a principle to benchmark against.
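
A minimal sketch of that data-first model, written with modern React hooks rather than the API React launched with (the component and its data are invented for illustration):

```tsx
// Assumes the react package is installed; the component is hypothetical.
import { useState } from "react";

export function UnreadBadge() {
  // The unread count is the data; the UI below is a function of it.
  const [unread, setUnread] = useState(0);

  // When setUnread changes the data, React re-renders this piece of the UI.
  return (
    <button onClick={() => setUnread(unread + 1)}>
      Unread messages: {unread}
    </button>
  );
}
```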

Vue

Vue.js is a library for building web interfaces. Together with some other tools you can also call it a “framework”, although it’s more like a set of optional tools that work together really well.

Vue, in many ways, began as a reaction (pardon the pun) to React. Creator Evan You recognized the advancements that React was able to make, but at the same time saw a community that was fractured and ever in flux (last one, I promise).

You initially resisted the name “framework” because he wanted Vue to be something that provided only the bare minimum out of the box. But to try and limit the splintering of the Vue community, You put a lot of effort into modular, first-party add-ons for the main Vue codebase. It blended the more prescriptive approach of frameworks like Angular with the flexibility of libraries like React to create a set of optional tools that happen to work really well together.

Preact

For me, I wanted the developer experience [of React], I just didn’t want to pay for it. So that got me thinking.

Preact actually started out as a Codepen way back in 2015, a way for Jason Miller to experiment with some of the rendering limitations of React. But it didn’t truly come into focus until a few performance benchmarks were published online demonstrating the sluggishness of React on mobile devices, benchmarks that Miller’s quick and dirty experiment improved on greatly. So he released the code as the open source library Preact.

The stated goal of Preact has always been exactly what’s quoted above — all of the niceties of working with React at less of a performance cost (hence Preact). Since then, the library has been updated and retooled on more than one occasion, but it has always kept that purpose in the foreground, making use of React’s APIs while simultaneously making changes to the way it works behind the scenes.

Hyperapp

Hyperapp is a modern JavaScript library for building fast and feature-rich applications in the browser. It’s the smallest out there (1.4 kB), it’s simple, and fun to use.

“Small” is certainly the operative word for Hyperapp (originally called Flea). The codebase initially clocked in at around 4KB, but by the time of its 1.0 release, that number had dropped even further. Hyperapp gives you just the basics, a way of managing state and templates in your code, but its goal is to mostly provide a few tools and get out of the way. From the beginning, creator Jorge Bucaran has always emphasized Hyperapp’s small footprint and pragmatic approach as its foundational principles.

Conclusion

If there’s a lesson to be learned here, it’s that perspective guides frameworks. A framework’s design, its architecture, even the problem it is trying to solve all follow from this perspective and set a tone. From there, a community gathers around this perspective and catalyzes its efforts, and after a bit of time, a new framework is born.

An Early History of Web Accessibility

Accessibility is one of the foundational principles of the World Wide Web. Fighting to preserve that principle are the creators behind the most powerful tools, some of which still exist today.

In 1995, Dr. Cynthia Waddell published a web design accessibility standard for the City of San Jose’s Office of Equality Assurance. It included a comprehensive and concise list of specifications for designers of the city’s website to strictly adhere to. The list included, among many other things, a requirement that all image tags be accompanied by alt text, all video and audio elements be paired with text transcriptions, and an explicit cap of only two columns per HTML table, to limit the damage that tables for layout did in a poorly supported browser and screen reader landscape.

If some of these seem commonplace and obvious these days, it is only because of this groundbreaking work by Waddell in the earliest days of the web. Other rules on the list, however, may seem a bit more unfamiliar, abandoned as a best practice somewhere along the line, occasionally at the expense of the users. But each rule is essential for making the web open and accessible to all.

The standard was years ahead of its time, predating the official Web Content Accessibility Guidelines (WCAG) of the W3C by almost half a decade. Browsers were still in the days of early HTML with only the most basic of tags, Netscape and IE were locked in a browser war for market control, and styling the web in any way was still a few years off. It was the first of many efforts to formalize a standard for equal and open access on the web, a principle at the core of the World Wide Web’s ideals, if not always perfectly executed in practice. Waddell’s work soon influenced that of accessibility experts everywhere in the industry.

Several years later, Waddell was serving as an Executive Director at the International Center for Disability Resources on the Internet (ICDRI), a non-profit imbued with the dual purpose of advocating policy changes at all levels of government and promoting accessible design best practices to the larger web community. While there, her work led her to aid in the development of a new tool by the software company HiSoftware that enabled automated testing of websites against popular accessibility standards. When it came time to name the tool, there really was no question. It was called Cynthia Says.

Cynthia Says started with a simple webform. Developers could visit the Cynthia Says site, input the URL of their own website, choose from a number of accessibility standards (Section 508, WCAG AAA, etc.) to compare against, and click “Submit”. After crawling the HTML of the URL, the site would give its users an accessibility report, a baseline overview of accessibility performance mixed with comprehensive details about where they went wrong (and right). One by one, site developers could verify their own design against the criteria of their chosen standard. Each requirement was flagged as either a “pass” or a “fail”. If, for any reason, a requirement failed, users were given tangible next steps for getting the issue resolved.

Sample report from Cynthia Says

Like the work of Waddell herself, the goal of Cynthia Says was both to provide developers with a list of best practices and to educate them on the nature of the issue themselves. Without understanding and context, it was all too likely that site designers would fall into the same bad habits again and again. The information provided in the reports was far from trivial. If you were, for instance, checking your site against Section 508 compliance, language would be pulled directly from the legalese of the legislation, with a fleshed out description under each. In most cases, multiple solutions would be provided.
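
To give a sense of what this kind of automated checking involves, here is a rough sketch of a single rule (every image must carry alt text) run against a page’s DOM. It is purely illustrative, not how Cynthia Says or any of the tools below were actually implemented.

```typescript
// Illustrative only: flag every img element that lacks an alt attribute,
// then report a pass/fail result the way these checkers did.
function checkImageAltText(doc: Document): { passed: boolean; failures: string[] } {
  const failures: string[] = [];
  doc.querySelectorAll("img").forEach((img) => {
    if (!img.hasAttribute("alt")) {
      failures.push(img.outerHTML.slice(0, 80)); // a snippet identifying the offender
    }
  });
  return { passed: failures.length === 0, failures };
}

const report = checkImageAltText(document);
console.log(report.passed ? "Pass: every image has alt text" : report.failures);
```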

At a time when the web was struggling to understand accessibility, Cynthia Says offered salvation, a concrete path to success. And yet, it wasn’t the only tool out there doing that.

Bobby was released as early as 1997, in the wake of legislation that updated Section 508 to bring stricter rules to the practices of public web design projects. And though it predated Cynthia Says by a few years, Bobby similarly offered users a way to crawl their sites for accessibility issues. And like Cynthia Says, users were then presented a report about errors or warnings related to accessibility across that site.

Upon release, the tool’s list of potential errors and warnings was little more than a handful of rules cobbled together from best practices promoted by experts in the industry. As legislation and standards evolved, Bobby grew with them, eventually letting users test their sites against a number of different rulesets.

Bobby’s approach was a bit different from Cynthia Says, but it offered the same basic report

These reports were centered almost entirely on the fixes required, diverging slightly from the more educational route of Cynthia Says. Still, you had to hand two things to Bobby. The first was that users could upload a custom HTML file, and starting in 2001 even purchase a version of the software that could be run locally against sites in development.

The second was the Bobby badge.

In general, the rules Bobby checked against were labeled as Priority One, Priority Two, and Priority Three, with the first priority marking fixes that were essential and required, and the third marking issues that needed a manual fix. If your site had any Priority One issues, it would be marked on the report with this icon (British police officers are occasionally referred to as “bobbies”, hence the pun):

If your site passed without any Priority One issues, you’d be presented with the Bobby badge. The badge looked like this:

If your site was “Bobby Approved,” it meant that your site’s content was well-structured and well-intentioned. Sticking the badge on your site was meant both as a personal boast and as a gentle nudge to peer-pressure other web developers into doing the same. And to some extent, that strategy worked. There was a time on the web when every respectable developer made sure to include the Bobby badge in their footer’s design.

Over the years, Bobby continued to evolve into a dynamic tool that could be used in and out of the browser, and could test across different runtimes and environments. It remained incredibly popular until the software was officially sunsetted in 2005.

Then there’s WAVE, or, as it was originally called, the WAVE. Among these other tools, WAVE gets a mention for its sheer longevity. Never as popular as some of the alternatives in the earliest days, WAVE has, over time, managed to outlast them all.

The WAVE began as a research project by Dr. Len Kasday at Temple University in early 2000. After a couple of years of development, WebAIM took over the project and continued to make improvements.

WAVE offered a similar experience to other tools on the market. It validated sites against popular specifications and listed out known issues for developers to assess. WAVE set itself apart by getting into the browser extensions game just about as early as such a thing was possible.

Even the first versions of WAVE came bundled as extensions that could be installed in a browser and run from inside any page with the click of a button. Once clicked, a report would fly out from the left side with plenty of details. Rather than making you come to it, WAVE would follow you all around the web. It was a simple idea, but it opened up whole new possibilities.

Adding WAVE to your browser, even in its early days, was simple

That portability remained central to the development of WAVE. It has continued to evolve into more than just a browser extension, with extensibility always core to its mission: a programmatic API that can be accessed from anywhere, and more advanced in-browser tools. That flexibility seems to have given WAVE the edge; it is still in active development today.

We don’t often give much thought to our tools (that is, unless they’re broken). That makes it easy to forget that they are forged through experience and shaped by community. Accessibility is hard. We have our tools, and the expertise of the people behind them, to thank for making it a little bit easier.

A Toast to Some Great Blogs
https://thehistoryoftheweb.com/postscript/lemonyellow-intellectual-diary/
Mon, 07 Jan 2019 12:53:29 +0000

The web has been… unpredictable. We usually think it will go one way, only to see it go another. Case in point: there were plenty who believed major media organizations would find their place on the web.

What we didn’t expect so much was this totally unpredictable outgrowth of personal and boundless creativity, a string of blogs and bloggers that became the web’s gatekeepers, trendsetters, and evangelists at a time when no one quite knew what to do with it yet. These were regular people writing regular stories that could not have been more unique. And every once in a while, a blog broke through. It captured the imagination and attention of the web’s hungriest readers and spread from blogroll to blogroll. You might bookmark it, jot the URL down on a post-it note, really anything to make sure you could visit it day after day, hoping you got there after a new post was published.

I hope this blog is something like that for you. I’d like to look at two others.

Heather Anne Halpert named her blog LemonYellow for, she recalls, no particular reason. Created in the spring of 1998, LemonYellow came into being before the word blog began circulating through media circles and entered the common vernacular. So novel and indescribable was her website that when it was profiled in the New York Times, writer Katie Hafner referred to the site as an “intellectual layer cake.” Blog may be how we refer to it these days, but intellectual layer cake just about nails it.

A screenshot of LemonYellow, somewhere close to the end of its run

Each day, Halpert would post a few concise thoughts to LemonYellow. Rather than stick to a theme or a pattern, Halpert filled LemonYellow with a pure stream of consciousness so broad it managed to cover philosophy, technology, the English language and the inner turmoil that comes with waking up as a human being. You might, for instance, find her quipping on her use of punctuation in the same breath as a book recommendation or a pithy quote,

I’m not going to defend myself for poking around in areas quite obviously outside those of my own expertise. Neither will I defend my mid-Victorian obsession with italics… Mainly because it’s indefensible. Just be thankful for my fortitude in the face of that most seductive of punctuation, that siren of syntax…the exclamation point.

Halpert used her blog to broadcast her thoughts and feelings about anything and everything. She cultivated LemonYellow as a place of discovery, not just of the inner recesses of her own mind, but of the many fascinating things she herself had found while probing and searching through the vastness of the web and beyond. When she found a new site, or an interesting blog post, she’d post a link and a brief comment, a practice that would soon become known as the link blog. When she read a new book, she wrote a quick review. And when someone mentioned something funny or particularly insightful in casual conversation, she would post a recap. On occasion, she’d even post nothing more than an “I still don’t have my laundry,” a final punctuation to a stream of thoughts.

Halpert, however, was a software designer by trade, and many of her best contributions were prescient and insightful looks at the user’s experience of the designs we take for granted every day,

I’m interested in both imposing patterns on disparate pieces of information; and, of course, looking for existing patterns. However the former is more interesting in that it is, according the the basis of the (empirical) scientific theory, taboo. It involves manipulating one’s data to fit a predefined pattern. However, it can produce rare and beautiful results. For example, imposing a grid will make vivid the shredded spots in the woof and warp of a relationship otherwise taken for granted. Change the data, impose the same grid and the meaning of the relationship is entirely new.

LemonYellow gained steam, enough, at least, to be picked up by the New York Times technology section. For more than a few readers, it was an entryway to the web. Buried in its links and one-off repartee was a deep connection, a rare glimpse into the truth of human experience. A way to find something new, something different, something you never thought of. It remained that way until 2001, when Halpert closed down the site for good.

Textism in its later, plainer form

Dean Allen was very much of the web, the kind of person, you might say, that the web was created for. Creative, open, clever, and deeply, deeply concerned with the state of digital typography. In late 2002, Allen created the Textile markup language, an early precursor to Markdown that made writing structured, semantic HTML as simple as learning a few markup shortcuts. In 2003, he publicly released Textpattern, a blogging tool built for content publishers, with the simple slogan: Just Write. Allen was no stranger to writing himself. He announced both projects on Textism, his personal website and almost-daily-updated blog.
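
For a sense of how lightweight Textile was, here is a small, made-up sample of Textile source, followed by roughly the HTML a Textile processor generates from it (the content is invented, but the syntax is standard Textile):

h2. A heading

This is *strong* text, this is _emphasized_ text, and "this is a link":https://example.com turns into a real hyperlink.

bq. This sentence becomes a blockquote.

becomes:

<h2>A heading</h2>
<p>This is <strong>strong</strong> text, this is <em>emphasized</em> text, and <a href="https://example.com">this is a link</a> turns into a real hyperlink.</p>
<blockquote><p>This sentence becomes a blockquote.</p></blockquote>

A writer typed the plain version; Textile took care of the angle brackets.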

Allen didn’t deal in the kind of brevity you might find on LemonYellow, but his thoughts were no less wide-ranging or insightful. He was, after all, a great writer who loved to write, and his blog reflected his hobbies, his passions; every facet of his being was represented. Before the days of self-conscious brand-building and carefully crafted identities, Allen was comfortable just being himself, and wrote about whatever it was that struck him. Textism was an extension of his unique personal voice, one that was sharp and witty and bounced around from tech to literature to, as Jason Kottke recalls, how to cook a great stew,

First, you need some water. Fuse two hydrogen with one oxygen and repeat until you have enough. While the water is heating, raise some cattle. Pay a man with grim eyes to do the slaughtering, preferably while you are away. Roast the bones, then add to the water. Go away again. Come back once in awhile to skim. When the bones begin to float, lash together into booms and tow up the coast. Reduce. Keep reducing. When you think you have reduced enough, reduce some more. Raise some barley. When the broth coats the back of a spoon and light cannot escape it, you are nearly there. Pause to mop your brow as you harvest the barley. Search in vain for a cloud in the sky. Soak the barley overnight (you will need more water here), then add to the broth. When, out of the blue, you remember the first person you truly loved, the soup is ready. Serve.

Each post on Textism was carefully crafted, many of them written as well as any novel. His goal in life was to enable others to do the same. At the heart of Textism was Allen’s fascination with the many possibilities of the World Wide Web. He believed strongly in its ideals, and he often wrote about technology with an understanding and a tone that very few before or after have managed. He was not so interested in the technical particulars of his projects, but rather in the things they enabled people to do. He brought a human element to technical writing, even in the most banal of situations. Slipped into an update about his CMS Textpattern, there might, for instance, be a parenthetical that revealed more about the process of making software than can be found in the lengthiest and most detailed post-mortem.

Some time in the last couple weeks, while working on Textpattern (you know, the CMS I’ve been using on this site for two years, the one that was running like a finely tuned and greased machine, the one I decided to release to the world, whereupon I was seized by a strange and insistent demon who was of the opinion that simply doing one thing and doing it well was not enough, boy, you need to attach a full-fledged browser-based HTML and CSS editing monster that would do several things in a kind of so-so sort of way but it sure would impress the twelve people to whom such a thing would even make any sense, and whereupon I tapped out lines of inelegant PHP code until droplets of blood formed on my forehead and I was hoarse from screaming well why the fuck not at a computer screen every time something refused to work – and of course things don’t work, things don’t like to work – and on it went until I was sufficiently unstupid to pause and grasp that having something that does one thing well is a good deal better than having many things that are just sort of so-so, and hey there’s all this time in the future to add those things when and if they do work, and I began the relatively swift process of dismantling all those flights of fancy until I arrived at the point I am now, which is ready to release a public beta) I discovered that one of the character-conversion utilities I was working on had potential use in the Word HTML Cleaner, which had been giving people problems lately. So I installed it and it seems to work.

It is, perhaps, unsurprising then that Textism was incredibly popular with the early-to-the-web tech crowd. Many came back to his blog day after day to read about whatever it was he wanted to write about. More often than not, Allen was advocating for better typography and more careful design. He wanted the web to be beautiful. He wanted people to take care with what they published, to be proud of what we are all building together.

There’s this cliché that’s sometimes passed around. No one’s quite sure if it was Brian Eno or Lou Reed who first said it. It goes something like: the Velvet Underground didn’t sell many records, but everyone who bought one started a band. Textism is a bit like that. Textism wasn’t the most popular site on the web, but it served as a template for the indie web of the early ’00s, and has been cited as a major influence by Jason Kottke when he created Kottke.org, and by John Gruber when he got going with Daring Fireball. It was the blog of choice for many, many tech bloggers. Allen, unfortunately, passed away in early 2018. His legacy, however, will not soon be forgotten.

The web is a fascinating technology. I myself am frequently enamored with its inner workings. So much so that it’s easy to forget that it’s simply the medium. It’s what we fill it with that counts. And when someone takes the time to add something extraordinary, we should all take the time to appreciate it.