Like the ampersand, the ‘@’ symbol is not strictly a mark of punctuation; rather, it is a logogram or grammalogue, a shorthand for the word ‘at’. Even so, it is as much a staple of modern communication as the semicolon or exclamation mark, punctuating email addresses and announcing Twitter usernames. Unlike the ampersand, though, whose journey to the top took two millennia of steady perseverance, the at symbol’s current fame is quite accidental. It can, in fact, be traced to the single stroke of a key made almost exactly four decades ago.

In 1971, Ray Tomlinson was a 29-year-old computer engineer working for the consulting firm Bolt, Beranek and Newman.1 Founded just over two decades previously,2 BBN had recently been awarded a contract by the US government’s Advanced Research Projects Agency to undertake an ambitious project to connect computers all over America.3 The so-called ‘ARPANET’ would go on to provide the foundations for the modern Internet, and quite apart from his technical contributions to it, Tomlinson would also inadvertently grant it its first global emblem in the form of the ‘@’ symbol.

The origins of the ARPANET project lay in the rapidly advancing state of the art in computing and the problems faced in making best use of this novel resource. In the early days, leaving a ruinously expensive mainframe idle even for a short time was a cardinal sin, and a so-called ‘batch processing’ mode of operation was adopted to minimise down time. Each computer was guarded by a high priesthood of operators to which users submitted their programs (often carried on voluminous stacks of cards) for scheduling and later execution.4 The results of such a ‘batch job’ might arrive hours or days later, or sometimes not at all: a single error in a program could ruin an entire job without any chance for correction. As time wore on, however, processing power grew and costs fell — by the mid-1960s, room-sized mainframes had been joined by newly compact ‘minicomputers’ measuring a scant few feet on a side5 — and the productivity of users themselves, rather than of the computers they programmed, became the greatest problem. In place of batch processing arose a new ‘time-sharing’ model wherein many users could ‘talk’ at once to a single computer, typing commands and receiving immediate feedback on their own personal terminal.6

The most common terminal design of the era was the ‘teletype’,* a combined keyboard and printer on which a user could type commands and receive the computer’s printed reply.7 There were terminals which used other means to display input and output — notably cathode ray tubes, or CRTs — but teletypes were near-ubiquitous, spawning hardened military versions and 75-pound ‘portables’.8 Unlike today, where a keyboard and display normally occupy the very same desk as their host computer, teletypes were routinely separated from their hosts by hundreds of miles; a teletype might just as easily be in the next city as the next room, communicating with its host computer over a telephone line.

Despite this ability to be geographically distant from its host, each terminal was still inextricably tethered to a single computer. Moreover, many models of computer understood different sets of commands, making it difficult or even impossible to move programs from one model to another. Robert Taylor, the head of ARPA’s Information Processing Techniques Office, was well acquainted with both problems. His office contained three teletypes connected to three different computers in Santa Monica, Berkeley and MIT respectively, and each one spoke its own unique dialect of commands. Taylor said of the situation:

For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. [of Santa Monica] and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them.9

Thus, despite their ever-increasing complexity and utility, for the most part computers still lived in splendid isolation. It was the combination of these factors — the attractions of ever-increasing power and flexibility, impeded by a frustrating inability to share information between computers — which spurred ARPA to investigate a network linking many computers together. As Taylor concluded:

I said, oh, man, it’s obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPANET.9

In 1968, the agency solicited bids from 140 interested parties to build the experimental network.10 Although it would not be the first computer network, it was by far the most ambitious: not only would it span the continental United States (and, eventually, cross the Atlantic via satellite link11) but it would be the first to use a novel and as yet untested technique called ‘packet switching’ on a grand scale. Packet switching relied not on a direct connection between sender and recipient, but instead sent messages from source to destination by a series of hops between sites adjacent on the network, fluidly routing them around broken or congested links.12
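The routing idea at the heart of packet switching can be sketched in a few lines of modern code. This is purely illustrative: the site names and topology below are invented, and real IMPs used distributed routing tables rather than a central search, but it shows how a message can still find a path when a direct link is down.

```python
from collections import deque

def route(topology, source, destination):
    """Find a hop-by-hop path from source to destination using a
    breadth-first search, flowing around any links absent from the
    topology (i.e. broken or congested)."""
    queue = deque([[source]])   # each queue entry is a partial path
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for neighbour in topology.get(path[-1], []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # destination unreachable

# A toy four-site network with no direct ucla-mit link:
links = {
    'ucla': ['sri'],
    'sri':  ['ucla', 'utah'],
    'utah': ['sri', 'mit'],
    'mit':  ['utah'],
}
print(route(links, 'ucla', 'mit'))  # ['ucla', 'sri', 'utah', 'mit']
```

A packet addressed from ucla to mit is relayed site to site along whatever working links exist, rather than requiring a dedicated end-to-end circuit.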

Some of the heavyweights of the time did not even bid. IBM, firmly wedded to the traditional (and expensive) mainframe computer, could not see a way to profitably build the network,13 while Bell Labs’ parent company AT&T flatly refused to believe that packet switching would ever work.14 In the end, an intricately detailed 200-page proposal submitted by relative underdogs BBN secured the contract, and construction of the ARPANET began in 1969. The project was a success, and by 1971 nineteen separate computers were communicating across links spanning the United States.15

An Internet router, circa 1969: a wardrobe-sized Interface Message Processor, or IMP, modified by Bolt, Beranek and Newman from a Honeywell DDP-516 ‘minicomputer’. Each node in the fledgling ARPANET was connected to an IMP. (Image courtesy of Steve Jurvetson.)

Working in BBN’s headquarters, Ray Tomlinson had not been directly involved in building the network, but was instead employed in writing programs to make use of it.16 At the time, electronic mail already existed in a primitive form, working on the same principle as an office’s array of pigeon-holes: one command left a message for a named user in a ‘mailbox’ file, and another let the recipient retrieve it. These messages were transmitted temporally but not spatially, and never left their host computer — sender and recipient were tied to the same single machine.17

Taking a detour from his current assignment, Tomlinson saw an opportunity to combine this local mailbox system with the fruits of some of his previous work. He used CPYNET, a command used to send files from one computer to another, as the basis for an improved email program which could modify a mailbox file on any computer on the network, but the problem remained as to how such a message should be addressed.16 The recipient’s name had to be separated from that of the computer on which their mailbox resided, and Tomlinson was faced with selecting the most appropriate character to do so from the precious few offered by the keyboard of his ASR-33 teletype.

Looking down at his terminal, he chose the ‘@’ character.

One of the ubiquitous Teletype model ASR-33 teleprinters. Unlike on modern keyboards, the ‘@’ symbol shares a key with the letter ‘P’. (Image courtesy of Steve Elliott on Flickr.)

With four decades of email behind us, it is difficult to imagine anyone in Tomlinson’s situation choosing anything other than the ‘@’ symbol, but his decision to do so at the time was inspired in several ways. Firstly, it was extremely unlikely to occur in any computer or user names; secondly, it had no other significant meaning for the TENEX operating system on which the commands would run,† and lastly, it read intuitively — user ‘at’ host — while being just as unambiguous for the computer to decipher. His own email address, written using this newly formulated rule, was tomlinson@bbn-tenexa, signifying the user named tomlinson at the machine named bbn-tenexa, the first of the company’s two PDP-10 mainframes running TENEX.18
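Tomlinson’s rule is simple enough that splitting an address back into its parts takes a single line of modern code (a minimal sketch; real mail software is considerably fussier about what counts as a valid address):

```python
def parse_address(address):
    """Split an email address into (user, host) at the '@' sign.
    Splitting on the last '@' sidesteps the rare legacy case of a
    user name that itself contains the character."""
    user, host = address.rsplit('@', 1)
    return user, host

# Tomlinson's own address, written under his newly formulated rule:
print(parse_address('tomlinson@bbn-tenexa'))  # ('tomlinson', 'bbn-tenexa')
```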

With the modifications to his mail program completed and an addressing scheme decided, Tomlinson typed out a brief message on the second machine and sent it to his mailbox on the first. The message was broken down into packets as it entered the ARPANET, which then routed each packet individually to its destination and reassembled them into a complete message at the other end, before it was finally appended to his mailbox on bbn-tenexa. In real terms, the two machines occupied the same office, and the first network email travelled a physical distance of only around fifteen feet.19 Perhaps fittingly for this anticlimactic first step, Tomlinson has since forgotten the contents of the message:

I have seen a number of articles both on the internet and in print stating that the first email message was “QWERTYUIOP”. ’Taint so. My original statement was that the first email message was something like “QWERTYUIOP”. It is equally likely to have been “TESTING 1 2 3 4” or any other equally insignificant message.20

Half-fearing the wrath of his superiors were they to discover his pet project, Tomlinson initially kept his invention to himself. As a colleague recalled, “When he showed it to me […] he said, ‘Don’t tell anyone! This isn’t what we’re supposed to be working on.’”21

His concern was misplaced: email quickly became the fledgling network’s first ‘killer app’, gaining influential converts such as ARPA’s director Steve Lukasik. Lukasik took to travelling with one of the aforementioned ‘portable’ teletypes so he could check his mail even when out of the office,22 and his managers quickly found email was the only reliable way to keep in touch with him.23 This viral quality led to an explosion in the use of email across the network, and by 1973 — only two years after the first email was sent from one side of Ray Tomlinson’s office to the other — it accounted for three-quarters of all traffic on the ARPANET.24

Tomlinson’s makeshift programs were replaced many times over as the ARPANET expanded and ultimately evolved into the modern Internet, but the use of the ‘@’ symbol remained a constant. As one half of an indivisible double act with the world wide web, email became synonymous with the Internet as a whole, and the ‘@’ symbol’s place in history was assured.

How, though, did the ‘@’ symbol find its way onto the keyboard of Ray Tomlinson’s ASR-33 teletype and so pass into Internet history? Moreover, where did it come from in the first place?

*

This type of combined keyboard/printer terminal device is more correctly called a ‘teleprinter’ or ‘teletypewriter’.7 ‘Teletype’ is an example of synecdoche, named for a prominent teleprinter manufacturer of the time. ↢

†

The ‘@’ symbol did have some unfortunately incompatible meanings for other operating systems. Perhaps the most infamous was ‘erase all preceding characters on this line’ on Multics, the predecessor to UNIX. ↢

Comment posted by Jens Ayton on July 24, 2011 at 1:54 pm

“Remained a constant” is a bit strong. Various other e-mail schemes with different addressing formats were widely used, notably UUCP with its “bang paths” – somehost!otherhost!yetanotherhost!finalhost!user – before the grand unification into SMTP.

Comment posted by Keith Houston on July 24, 2011 at 3:26 pm

Hi Jens,

I may have worded that badly! I meant to suggest that use of ‘@’ as an email address separator (independently of the standards which have since adopted it) has remained in use since Tomlinson’s invention of it. I’m intrigued to hear about UUCP ‘bang paths’, though — I’m not particularly familiar with UNIX, so it’s interesting to hear about other such schemes.

Comment posted by Jeffery Leichman on July 25, 2011 at 2:15 pm

I’d only seen those ‘bang paths’ once before, in 1995, for a fellow who had a VERY old address with ATT. At first I was skeptical, and not even sure that my e-mail client would accept it. But, it worked. The problem, I believe (I may be wrong), is that the machine addressed must always be on, and always ready to accept mail; there’s no holding queue, like there is with POP.

Comment posted by Leif Neland on July 26, 2011 at 2:55 pm

UUCP still works nicely, now also over tcp/ip instead of just modems originally.

It is useful, where you have a mailserver, which can not be reached, either because it is behind NAT, or it is on dialup.

A preferably always-on host is used as MX, temporarily holding the mail until the mailserver calls the uplink, receiving the held mail.

This scheme is preferable to e.g. fetchmail, where the mail is put in a regular mailbox at the uplink, and fetchmail retrieves the mail via pop3 or imap. There are problems with this approach; one is that envelope addresses are lost, causing problems with mailing lists.

With uucp, this problem does not exist; uucp can compress the mail during transmission, and is more robust to transmission errors.

Comment posted by Ray on July 26, 2011 at 11:52 pm

I was thinking about the bang schema of email addresses, as that was my first exposure of getting email from me to a friend in California. And it could take a day to route and another coming back, based on when my message hit a system and when it was scheduled to connect up with the next one.

Comment posted by Keith Houston on July 24, 2011 at 5:08 pm

Comment posted by John Cowan on July 24, 2011 at 5:11 pm

A few notes:

1) It’s not quite true to say that in 1971 all teletypes were permanently wired to specific computers. AT&T (aka The Phone Company then) would not allow random devices to be connected to their network, but acoustic couplers, which had no electrical connection, were commercially available as early as 1968, permitting suitably equipped terminal devices to be connected to any computer that could be dialed up.

2) As Keith’s message implies, a “bang path” was a form of relative rather than absolute mail addressing. The Arpanet was a closed system, with a definite and fixed list of hosts, each of which had a name. BBN maintained the list of names in a file called HOSTS.TXT, the ancestor of /etc/hosts on Unix and C:\Windows\System32\drivers\etc\hosts on Windows systems today. When distributing this file to all Arpanet hosts became unwieldy, the DNS we use today was invented.

The uucp network, however, grew by local connections, and there was (until a late stage) no general table of all hosts. So when I was on a host named magpie in the 1980s, I could route mail to someone on its immediate neighbor marob with “marob!someone”, and they could reply to me as “magpie!cowan”, merely because the system administrators of magpie and marob had arranged to call one another periodically to exchange email, and without reference to any other central authority whatsoever. Things got more interesting when I wanted to go beyond marob. One of its neighbors was phri at the Public Health Research Institute in New Jersey, which in turn could reach any of allegra, philabs, cmcl2, or rutgers. The last of these was on the Arpanet, making it possible for me to reach anyone on that network with a bang-path like “marob!phri!rutgers!mit-xx!someone”, where “mit-xx” was an Arpanet host. Philabs, on the other hand, led into the core of the uucp network: via linus, it could reach decvax, and from decvax, any of such major sites as seismo, ihnp4, and others that “everyone” knew paths to.

So rather than having a fixed address, one published a partial “bang path” relative to well-known addresses: mine was “…!phri!marob!magpie!cowan”, and it was assumed that anyone knew or could find out how to reach phri. Later, things changed and my partial path became “…uunet!hombre!marob!magpie!cowan”, uunet having taken over seismo’s role as major hub. The system was unstable and messy, and email went hop-by-hop, often taking a day to a week to arrive, depending on the schedules of phone calls between systems. But it was a whole lot better than nothing.

Comment posted by Keith Houston on July 24, 2011 at 5:20 pm

Hi John,

Illuminating as always! Thanks for the comment.

My wording with relation to 1) is a little unclear — I should have said “tethered to a single computer, to the exclusion of all others, for the duration of a connection”, or something similar. If I get a chance to expand on this in the future, it might be interesting to delve into acoustic couplers, modems, and the legal intricacies of the AT&T monopoly.

Comment posted by Bob Kerns on July 26, 2011 at 1:05 am

Nice explanation, John.

I’ll point out just one minor correction. The problem wasn’t that there was a fixed set of hosts, exactly, and it wasn’t DNS that was central to eliminating ! paths.

The issue is that the original rules for who could be on the ARPAnet, and the costs associated with establishing a connection, meant that many people who wanted to exchange email, were on systems which weren’t on the ARPAnet.

The host names issue was a scaling issue, but that really only became a major problem later, long after ! paths were invented. When the regulations relaxed, so that other groups within universities, and ultimately commercial entities, could join the Internet, and when the cost of doing so (even if via dialup internet service) dropped with the rise of ISPs, UUCP became redundant.

One of the headaches with the ! paths is they were relative to some specific starting point. Life was rough for you folks on UUCP, and those of us managing mailing lists had to be aware of the ins and outs, deal with the bounced messages, or user complaints of messages silently lost or long delayed. Usually, there was nothing we could do, but suggest they track down each administrator along the way.

SMTP is designed to support store-and-forward operation in much the same mode (but internally managed). However, virtually all mail now goes from sender’s client to recipient’s mail server to destination SMTP server to recipient’s client, perhaps with a spam filter thrown in the middle. The universal connectivity brought by IP/TCP has allowed a much more direct and simple (and immediate and reliable) propagation of email.

Comment posted by Bob Kerns on July 26, 2011 at 1:18 am

DNS is why we have “.com”, “.edu”, “.org”, and “.net” — and a whole lot more, now.

I worked for the very first .com company — Symbolics.com — when DNS was first instituted.

Before that, host names were things like MIT-AI (the MIT AI lab) or SAIL (The Stanford AI lab), each naming a specific machine (with aliases) at a specific network address, rather than based on organization. So you had to know specifically which system your recipient received his mail on, though usually there were suitable forwardings for most people. Now, with DNS and MX records, it’s decoupled from what systems actually handle the email, and often it’s not even a system operated by the recipient’s organization (perhaps, say, Google).

There were a lot of arguments about whether names should be like they were in UUCP — from less specific to more — or as we ended up going with — from more specific to less. We could have had com.google instead of google.com! (For practical reasons, some applications that reuse domain names for other purposes, such as the Java programming language, do reverse them.).

I think the telling argument may have been that UUCP wasn’t really from more specific to less, but just better-connected to less-well-connected. And people really wanted to break from that mindset, and emphasize that these were names in a naming scheme, and not at all paths.

There was another scheme used as well, using % instead of ! This wasn’t a full path-based scheme, but a forwarding scheme used at some sites.

So you might have a name like foo%somevax@somehost.berkeley.edu — inside Berkeley there would be a non-public host named somevax, with a user named foo. somehost.berkeley.edu would take care of rewriting mail headers both ways, and forward mail back and forth between somevax and the internet.

Comment posted by Graham Yarbrough on July 27, 2011 at 12:56 am

I would point out that both ATT and Western Union — yes! _that_ Western Union, the one that used to send TELEGRAMS — offered a dial-up teletype service in the ’60s and ’70s [after all, that was the purpose of teletypes. To allow people to send _written_ messages to others just like they could place telephone calls. Indeed, for those of us who were around then, there was the infamous area code 710 reserved for teletype machines/numbers]. There were also teletype MODEMS that one could connect to the computers of the time. I wrote far too much code using and for the teletypes of the day! 35 baud, 5 and 7 track PAPER tape. Frequently all upper case only character set!
But to the topic of John’s comments: Many teletypes were dialed into their associated computer; they need not be hard wired to any specific one but could be used by many people and connected to several computers. Indeed, I used many teletypes for programming which we dialed into any number of computers of the day. The old GE-650 running GEOS — or was it GECOS? That really is a memory stretch — was one of my favorites. Many teletypes were used by their owners both for sending “normal” messages to other teletypes and for sending programs — usually prepared off line on paper tape and sent to its connected host — as you were usually charged a stiff connect fee, not counting the considerable long distance charge associated with the call itself, so we tended not to sit and “type” in real-time at the keyboard.
As for all the rest here, I can only offer that it brings back many memories. We’ve been doing this far too long!

Comment posted by Sean MacGuire on July 24, 2011 at 10:01 pm

… and each machine running UUCP (unix-to-unix copy program) had a list of hosts it knew about which it called periodically, mostly hourly, and generally using a dialup modem.

The uucico program did the dirty work of sending the information to the remote system (uucico -r1 -x9 -ssystem-name is burnt into my brain) because there would often be horrible problems requiring manual debugging.

Comment posted by Michael K on July 24, 2011 at 10:44 pm

Moin,

I might add the % symbol in eMail as a bridge between UUCP and SMTP based mails. My internet mail address at that time was kraehe@bakunin.north.de; now when someone was polling UUCP at my site and wanted to be contacted by eMail, his address would have been fauxuser%paxhost@bakunin.north.de. So eMail would be first delivered to my host, and then converted into paxhost!fauxuser for UUCP.

Comment posted by Hubert Canevet on July 27, 2011 at 12:48 pm

Comment posted by Joel Rees on July 25, 2011 at 1:18 am

A couple of thoughts,

I thought your explanation that batch mode was adopted to minimize downtime was rather a reversal of temporal order, but maybe not.

Some people do claim that the early computers were operated interactively by their operators (directly in machine code!), and that batch mode is what the early operators invented to allow them to organize job queues for all the clients who wanted programs run.

But forty years ago, the accepted story was that batch mode came first. I think this was because batch mode job control languages were produced before interactive job control languages. The human job queue methods were programmed into the job control languages, so the first job control languages were batch-oriented. Also, batch-mode offers fewer opportunities for ambiguities and error, and is thus a lot easier to program and uses fewer system resources.

It was only later that users were allowed to touch terminals and issue batch control commands, and it was still later that the job control languages shifted from batch-control to interactive use.

About the at symbol, you are aware that Wikipedia has a good page on the “at symbol”? It definitely had plenty of uses and plenty of reason for being on many of the old teletypes.

Comment posted by Keith Houston on July 26, 2011 at 8:24 am

Hi Joel,

Thanks for the comment. You’re right — the situation was more complex than simply “batch processing, then interactive”, although I found it tricky to get the balance right between too little detail and too much. Still, I hope you enjoyed the article!

Comment posted by Keith Houston on July 26, 2011 at 8:25 am

Comment posted by Jorge on July 25, 2011 at 6:01 pm

This is interesting, but actually the @ symbol was used for many years before computers came into existence. It is not a creation of modern technology. In Spain and other Latin American countries the @ symbol has been associated with “arroba” (a Spanish and Portuguese customary unit of weight, mass or volume. Its symbol is @. In weight it was equal to 25 pounds (11.5 kg) in Spain, and 32 pounds (14.7 kg) in Portugal.)
I remember in the early 70’s the @ was used in Cuba to represent “arrobas de caña” the C=cañas or sugar cane and the “a” representing the “arrobas” collected.
See the Wikipedia definition of Arroba (http://en.wikipedia.org/wiki/Arroba); so in reality @ is a symbol used to measure weight, and it was invented before the computer.

Comment posted by Keith Houston on July 26, 2011 at 8:28 am

Hi Jorge,

Thanks for that! I’m intrigued by the Cuban usage you mentioned — do you have any references for that? It’d be great to be able to include it in part 2, which will be talking about the ‘@’ symbol’s life before computers.

Comment posted by The Modesto Kid on July 26, 2011 at 10:25 am

The first time I remember understanding what @ meant (I had learned to call it “the at sign” already but that was just a proper name to me, without any meaning) I was reading an old Pogo cartoon and read a panel where the Crane (manager of the general store) is weighing some character’s (Miz Beaver’s?) purchase and says something like, “That’s 1 1/2 lbs. @ 13¢”

Comment posted by Gus Gustafson on July 26, 2011 at 12:20 pm

Comment posted by Alfons on July 26, 2011 at 12:33 pm

The @ symbol was a unit of measure and also the way to say “units of” or “at the price of” in Spain. In Spanish, its name is “arroba”. When it was put onto my country’s (Catalonia) computers, the discussion was between calling it “arrova”, in consonance with the weight measure, or another name in consonance with its new mailing use. Somebody called it “ensaïmada”, as it resembles the Mallorcan cake.
In Barcelona, an industrial and services district had to be modified to residential. But some other people decided to live in lofts in an old industrial area becoming “services”.
According to Catalan law, every place must have its planning code. The solution for Barcelona and its metropolitan area was not to use planning codes 2a or 22a (for industries or for housing) but 22@, that is to say, places where residential housing is shared with some modern industries (design, software, media).
Now we have other “@” places. 7a and 7b are classed as “libraries” or “sports places”, but now we have “7@”, which combines them with wifi and multimedia public services… One of the districts is called “The 22@ district” now!

Comment posted by Keith Houston on July 26, 2011 at 3:44 pm

Comment posted by Fernando C on July 26, 2011 at 1:16 pm

Not a common usage, but I use it often when communicating with family and friends in Spain as a gender shorthand, i.e. “herman@s” meaning “hermanos” and “hermanas”. This is because the “at” symbol seems a combination of an “a” and an “o”. I am not a linguist — as a matter of fact, I suck at my own language’s grammar. Hope this does not spoil the second part. Enjoyed reading this.

Comment posted by C on August 1, 2011 at 7:22 pm

Comment posted by Steve Russell on July 26, 2011 at 2:51 pm

I think it’s worth mentioning the reason why teletypes were ubiquitous before computers and networking came along: they were invented to send point-to-point messages such as telegrams over the phone network.
The apparatus on the left side of the pictured ASR-33 teletype is a paper tape reader and punch. An operator could type in a message with the keystrokes (and corrections) recorded to a piece of paper tape. The message would then be sent by a transmitter over the phone network to a remote post office and appear as a paper tape at the other end. That tape would be used to print the telegram for delivery to the recipient.

So by the time computers came along in the 50s and 60s, teletypes were everywhere.

Speaking of the 60s, the ASCII character encoding inherited two control characters from the use of paper tape. The NUL character (ordinal zero) is what you get when you read a blank segment of tape (no holes). Erroneous characters were deleted by punching out all the holes, which is why the DEL character is 127 (111 1111 in binary).

And, of course, the memory of teletypes lives on even today in Unix/Linux systems, where “tty” pops up in many situations.

Comment posted by Bill Riddell on July 26, 2011 at 5:43 pm

I really enjoyed all of the “old timer” interaction; thanks for the memories. It’s always nice to see how various computer features have evolved. I held one of those “high priesthood” batch processing positions (IBM 704, batch processing at Northrop in the mid to late 1950’s) and then went on to programming other varieties of computers at other employers using punch cards, paper tape, CRT terminals and even the ASR-33. In those days “binary was king” and the @ symbol was just that, as it hadn’t been used as much more than an accounting symbol (in the US), but now the @ symbol has created a world of its own. AT&T was such a monopoly and massive hurdle that interactive data communications was stifled, so thanks to their breakup telecom could finally advance, and now the @ symbol is the “traffic cop/high priest” of the internet.

Comment posted by Keith Houston on July 26, 2011 at 6:13 pm

Hi Bill,

Thanks for the comment. I work as a programmer myself, and it’s great to hear that an “old timer” enjoyed the article and the comments.

I was sufficiently intrigued by some of the descriptions I read of batch programming and mainframe operation that I found an online manual for one of IBM’s modern mainframes (a zSeries, if I remember rightly), and it was like peering into the past: job control languages, batch programming and the lot.

Anyway, thanks again for the comment, and I hope you’ll continue reading some of the other articles!

Comment posted by Keith Houston on July 26, 2011 at 6:16 pm

Comment posted by Liam Caffrey on July 27, 2011 at 1:09 pm

Fascinating article, I used to have an email address with ‘%’ in my name “Liam=Caffrey%Suppliers%HQIM=Mun@bangate.compaq.com” (How about that for an email address – still amazing to find it in postings after 17 years). I never realised it was a bridge between UUCP and SMTP. Banyan Vines was in the mix somewhere as well.
Another interesting thing about the NUL and DEL characters and their origins from the TTY world… I was once importing some data into a database and somehow, somebody had managed to get a DEL character into a value in a field through a windowed interface (possibly some 4GL) using VT100. I don’t think it is possible to do that on a modern keyboard (apart from mapping perhaps).

Comment posted by John Cowan on August 1, 2011 at 3:38 am

Yes, that meant your Vines identity was Liam Caffrey@Suppliers@HQIM Mun (individual@group@organization), and the @s were converted to %s and the spaces to =s for Internet compatibility. UUCP probably wasn’t involved.

Comment posted by Jed Donnelley on July 27, 2011 at 4:15 pm

The PDP-10s that Tenex ran on were not considered “minicomputers”. They cost something like $100,000 and time on them was rather expensive. They could more likely be considered a “mainframe”. They occupied a considerable portion of a room.

Comment posted by Keith Houston on July 27, 2011 at 4:45 pm

Comment posted by Solo Owl on March 18, 2014 at 12:30 am

Oh, I dunno about that. My introduction to computing was at a site that had a “mainframe” (IBM 370) and a “minicomputer” (DECsystem 20). You are right insofar as, from a hardware perspective, the chief difference was size. However, the DEC’s operating system was designed from the ground up as an interactive time-sharing system; time-sharing was grafted onto the IBM, which still had its priesthood.

Comment posted by Rob Chant on July 30, 2011 at 12:51 pm

Everyone has been commenting on the history and computing aspects of your excellent post, but, as the primary subject of your blog is typography, I feel the need to indulge in some extreme pedantry and mention a typographic mistake you have made:

‘Taint so

In your quote from Ray Tomlinson, that apostrophe should be the other way around. What you actually have there is an opening quote mark.

Comment posted by Rob Chant on August 3, 2011 at 11:15 pm

No probs! Poor typography on the web is one of my pet peeves, although I wouldn’t normally point to this kind of thing in a comment… but if there ever was a place for it, this is it! I’ve put a lot of attention into this in my own web platform; it’s not hard to do at all (with maybe one or two exceptions), it’s just that most developers don’t think about it.

Comment posted by Keith Houston on October 10, 2011 at 7:33 pm

Comment posted by Michael on August 15, 2011 at 11:55 am

Great article! Just one minor ‘nitpick’:
The term ‘Internet’ is usually capitalized, denoting the global Internet network, while the uncapitalized term ‘internet’ refers to any localized, private internal ‘inter-network’ such as that in a company or university, or even home internetwork. In the past, most technical writers were aware of this slight difference. However, today many readers/authors have lost sight of the significance of these differences.
Can’t wait to see Part II of your article.

Comment posted by Richard Gilbert on December 3, 2014 at 10:57 pm

Comment posted by Dannie on November 5, 2018 at 3:32 pm

The first line of part 2 is surely wrong. @ must (in English) simply have been a long-established shorthand for greengrocers (and the like) to chalk up ‘at each’. For example: apples @ 1/- (now 5p). The clue is in the shape – an ‘a’ at the heart of an ‘e’. There would be no material saving if it were shorthand for ‘at’ alone.

So saying ‘at’ is not enough, and ‘@ each’ is tautology.

Once again the computer world misuses language for its own ends. I wonder too if its misappropriation was helped by it being an underused key on the standard keyboard.

Comment posted by Keith Houston on November 5, 2018 at 5:07 pm

Hi Dannie — you have a point about the use of the word “each”, and I’ve tweaked the wording in part 2 to reflect this.

I wouldn’t say that the computer world has misappropriated the @-symbol. I doubt we’d be seeing it very much at all these days had Ray Tomlinson not chosen it! Also, yes, he almost certainly selected the ‘@’ because it was underused at the time.