Generation Nintendo

01Apr

In the final months of World War II, when the United States was trying to burn out the will of a starving Japan via the most sustained campaign of aerial incendiary bombardment in history, a handful of obvious targets remained strangely untouched. Among those targets was Kyoto: population 1 million plus, founded in the year 794, capital of the nation and home of the Emperor for most of the intervening centuries, home to more national shrines and other historic sites than any other city in Japan, world famous for its silk and cloisonné. If a single city can be said to embody the very soul of the Japanese people, it must be this one.

If the citizens of Kyoto believed that their city was being left untouched by the bombs raining down on the rest of the country out of respect for the special place it occupied in the Japanese psyche, they were partially correct. Yet the motivation behind their seeming good fortune was cold-blooded rather than humanitarian. American Air Force planners were indeed aware of Kyoto’s symbolic importance, but they hardly saw that importance as grounds for sparing the city. Far from it. Kyoto was being reserved as a target for a special new weapon, one which was referred to only obliquely in Air Force internal memoranda as “the gadget.” Today we know the gadget as the atomic bomb. Entirely destroying Kyoto with one bomb would deliver a shock to the rest of Japan unequaled by the destruction of any other possible target: “From the psychological point of view there is the advantage that Kyoto is an intellectual center for Japan and the people there are more apt to appreciate the significance of such a weapon as the gadget.” Kyoto must be left untouched while the gadget was made ready for service so that mission planners and scientists could properly evaluate the bomb’s effect on an undamaged clean slate of a target.

Hundreds of thousands of Kyoto residents would wind up owing their lives to Henry L. Stimson, a humane man tortured daily by the orders he had to issue as the American Secretary of War; never was there a Secretary of War who hated war more. In response to Stimson’s demand after the successful first test of the gadget in New Mexico, General Leslie Groves, head of the Manhattan Project, reluctantly presented the Air Force’s list of planned targets to him, with Kyoto at the top. Stimson was horrified. Citing the proposed destruction of Kyoto as an unforgivable act from which Japan would never recover, Stimson, 77 years old and in poor health, faced down virtually the entire entrenched bureaucracy of the American military to demand that the first atomic bomb to be used in anger be dropped somewhere, anywhere else: “This is one time I’m going to be the final deciding authority. Nobody’s going to tell me what to do on this.” His stubborn stance resulted at last in Kyoto being stricken from the list by grumbling generals who would have been perfectly happy if its destruction really had been a death blow to the culture it symbolized, thank you very much. Of course, in saving hundreds of thousands of Kyoto residents Stimson was also consigning to death hundreds of thousands of others in Hiroshima. Such are the wages of war.

The decision to spare Kyoto had another unintended consequence, one which may seem trivial — even disrespectful — to mention in parallel with such immense tolls in human lives saved and lost, but one which in its own way illustrates the interconnectedness of all things. Hidden away within Kyoto’s blissfully undamaged warren of ancient streets was a little family-owned company called Nintendo, maker of ornate playing cards and other games and collectibles. Absolutely dedicated to the war effort, as all good Japanese were expected to be at the time, they had lately taken to giving their products jingoist themes, such as a backgammon board illustrated by cartoon animals dressed up as soldiers, with Japanese flags flying proudly above them and British and American flags lying crumpled in the dust at their feet.

More than four decades later, Stimson’s determination to spare Kyoto and with it Nintendo boomeranged back on his country in a way that no one could have seen coming. Many contemporary commentators, conditioned by the Reagan Revolution to cast all things in terms of nationalism and patriotism, saw in the arrival of Nintendo on American shores the opening of the latest front in a new war, economic rather than military this time, between the United States and Japan. And this time it seemed that Japan was winning the war handily. They had come for our steel, and we had done nothing. They had come for our auto industry, and we had done nothing. They had come for our televisions and stereos, and we had done nothing. Now they were coming for our videogame consoles. How long would it be until the PC industry, arguably the biggest economic success story of the 1980s, was threatened as well?

Given the subject of this article, I should take a moment to clarify right now that this blog has not been and will never become a history of console-based videogames. This blog is rather a history of computer games, a culture possessed of plenty of interconnections and collisions with the larger, more mainstream culture of the consoles, but one which has nevertheless remained largely its own thing ever since the first popular videogame console and the first three pre-assembled PCs were all launched during the single fecund year of 1977. In addition to reasons of pure personal preference, I justify this focus by noting that a fair number of people are doing great, rigorous history in the realm of videogames, while the realm of computer games has been comparatively neglected.

Still, we can’t really understand the history of computer games without reckoning with those aforementioned interconnections and collisions with the world of the consoles. And one of the biggest and most obvious collisions of all was that crazy time at the tail end of the 1980s when Nintendo arrived to pull the rug out from under a computer-game industry which had spent the last few years convinced that it was destined to become the next great movement in mainstream American entertainment — i.e., destined to hold exactly the position that this Japanese upstart had just swept in and taken over with breathtaking speed. Small wonder that coded allusions to the dark days of World War II, accompanied by thinly veiled (or blatantly unveiled) racism, became the order of the day in many sectors of American culture, industry, and government alike. Meanwhile the bewildered computer-game executives were trying to figure out what the hell had just hit them and what they should do about it. Let’s join them now in asking the first of those questions.

Hiroshi Yamauchi

The history of the company known as Nintendo — the name can be very roughly translated as an admonition to work hard but also to accept that one’s ultimate success is in the hands of greater powers — dates all the way back to 1889, when it was founded by Fusajiro Yamauchi as a maker of intricately painted playing cards, known as “hanafuda” in Japanese. Nintendo managed to survive and grow modestly amid many changes in Japanese life over the course of the next half-century and beyond. The company’s modern history, however, begins in 1949, when Hiroshi Yamauchi, latest scion of the family-owned business, took over as president. Far more ambitious than his forebears, this latest Yamauchi was inspired by the entrepreneurial ferment of the rebuilding postwar Japan to expand Nintendo beyond playing cards and collectibles. The results of his efforts were decidedly mixed in the early years. Among his less successful initiatives were a line of instant-rice meals — a sort of ricey Ramen Noodles before Ramen Noodles were cool — and a chain of “love motels” offering busy executives the convenience of paying for their trysts by the hour. (Ironic as they might seem in light of Nintendo’s later rigorously enforced family-friendly image, at the time the love motels seemed to everyone around him a natural innovation for Yamauchi to have dreamed up; he was a notorious philanderer.) More successful, for a while, was a Nintendo taxi service. Yet even it was hardly a world-beater. Throughout the first two decades of Yamauchi’s lengthy reign he continued to cast restlessly about for the Big One, the idea that would finally take Nintendo to the next level.

In 1969, he took a big step toward finding his company’s life’s purpose when he founded a new division called simply “Toys.” Employing a number of young gadget freaks as inventors, Toys began to churn out a series of strange contraptions straight out of Rube Goldberg, such as the Ultra Hand, a scissor-like reach extender that was more whimsical than practical; the Ultra Machine, an indoor mechanical baseball pitcher; and the Ultra Scope, a periscope for peeking around corners and over fences. (Parents were not terribly fond of this last one in particular.) All were quite successful, opening at last the sustainable new business front for Nintendo that Yamauchi had been dreaming of for so long.

With electronic components getting smaller and cheaper by the year, Nintendo’s innovative toys inevitably began to take on more and more of an electronic character as time wore on. The first big success in the realm of electronic gadgets was something called the Nintendo Beam Gun, which combined a light gun with a set of targets equipped with the appropriate photoelectric sensors; more than 1 million of them were sold. Nintendo built on the Beam Gun’s success with a chain of Laser Clay Ranges — think “clay pigeons” — that spread across Japan during the mid-1970s, re-purposed bowling alleys where patrons could engage in gunfights with cowboys and “homicidal maniacs” projected onto the far wall.

With Atari now going strong in the United States, videogames were a natural next step for Nintendo. They first made a series of Color TV Games, each a home videogame capable of playing a few variants of a single simple game when hooked up to the family television set; they sold at least 2.5 million of them in the late 1970s. The Nintendo Game & Watch, a whole line of handheld gadgets capable of playing a single game each, did even better; Nintendo is estimated to have sold over 40 million of them during the 1980s. Meanwhile they were also moving into the standup arcade; Donkey Kong, released in 1981, became a worldwide smash, introducing the Nintendo name to many in the United States for the first time. The designer of that cute, colorful, relatively non-violent game, a virtual blueprint for the eventual Nintendo aesthetic as a whole, was one Shigeru Miyamoto. He would become not only Nintendo’s own most famous designer and public figure, but the most famous Japanese videogame designer of all time, full stop. The protagonist of Miyamoto’s Donkey Kong, a little leaping Italian plumber named Mario, was also destined for greatness as one of the most famous videogame characters ever created, if not the most famous (his only serious rival is likely Pac-Man, another contemporaneous Japanese creation).

All of this success, however, was only laying the groundwork for Nintendo’s masterstroke. Moving on from the single-game units that had so far been Nintendo’s sole output, Yamauchi tasked his engineers with creating a proper videogame console capable of playing many games that could be sold separately in the form of cartridges, just like the Atari VCS. The device they came up with was hardly state of the art even at the time of its debut. It was built around a clone of the venerable old 8-bit MOS 6502, the same chip found in the Atari VCS as well as American home computers like the Apple II and Commodore 64, with those circuits that were protected by patents excised. It offered graphics a little better than the likes of the 64, sound a little worse. The new machine was being readied at seemingly the worst possible time: just as the Great Videogame Crash was underway in the United States, and just as the worldwide conventional wisdom was saying that home computers were the future, videogame consoles a short-lived fad of the past. Yet Nintendo freely, even gleefully defied the conventional wisdom. The Nintendo Family Computer (“Famicom”) was deliberately designed to be as non-computer-like as possible. Instead it was patterned after Nintendo’s successful toys and gadgets — all bright, garish plastic, with as few switches and plugs as possible, certainly with nothing as complicated as a keyboard or disk drive. It looked like a toy because Nintendo designed it to look like a toy.

The Nintendo Famicom

Yamauchi realized that a successful videogame console was at least as much a question of perception — i.e., of marketing — as it was of technology. In the imploding Atari, he had the one great counterexample he needed, a perfect model of what not to do. Atari’s biggest sin in Yamauchi’s eyes had been to fail to properly lock down the VCS. It had never occurred to them that third parties could start making games for “their” machine, until Activision started doing just that in 1980, to be followed by hundreds more. Not only had all of those third-party cartridges cost Atari hundreds of millions in the games of their own that they didn’t sell and the potential licensing fees that they didn’t collect, they had also gravely damaged the image of their platform: many or most Atari VCS games were just plain bad, and some were in devastatingly terrible taste to boot. The public at large, Yamauchi realized, didn’t parse fine distinctions between a game console and the games it played. He was determined not to lose control of his brand as Atari had done theirs.

For better and for worse, that determination led to Nintendo becoming the first of the great walled gardens in consumer software. The “better” from the standpoint of consumers was a measure of quality control, an assurance that any game they bought for their console would be a pretty good, polished, playable game. And from the standpoint of Yamauchi the “better” was of course that Nintendo got a cut of every single one of those games’ earnings, enough to let him think of the console itself as little more than a loss leader for the real business of making and licensing cartridges: “Forgo the big profits on the hardware because it is really just a tool to sell software. That is where we shall make our money.” The “worse” was far less diversity in theme, content, and mechanics, and a complete void of games willing to actually say almost anything at all about the world, lest they say something that some potential customer somewhere might possibly construe as offensive. The result would be an infantilization of the nascent medium in the eyes of mainstream consumers, an infantilization from which it has arguably never entirely escaped.

Whatever the reservations of curmudgeons like me, however, the walled-garden model of software distribution proved successful even beyond Yamauchi’s wildest dreams. After releasing their new console to Japanese consumers on July 15, 1983, Nintendo sold more than 2.5 million of them in the first eighteen months alone. Sales only increased as the years went by, even as the hardware continued to grow more and more technically obsolete. Consumers didn’t care about that. They cared about all those cute, colorful, addictive games, some produced by an ever-widening circle of outside licensees, others — including many or most of the best and best-remembered — by Nintendo’s own crack in-house development team, with that indefatigable fount of creativity named Shigeru Miyamoto leading the way. Just as Yamauchi had predicted, the real money in the Famicom was in the software that was sold for it.

Minoru Arakawa

With the Famicom a huge success in Japan, there now beckoned that ultimate market for any ambitious up-and-comer: the United States. Yamauchi had already set up a subsidiary there called Nintendo of America back in 1980, under the stewardship of his son-in-law Minoru Arakawa. Concerns about nepotism aside — no matter how big it got, Nintendo would always remain the Yamauchi family business — Arakawa was ideal for the job: an MIT-educated fluent English-speaker who had traveled extensively around the country and grown to understand and love its people and their way of life. Under his stewardship, Nintendo of America did very well in the early years on the back of Donkey Kong and other standup-arcade games.

Yet Nintendo as a whole hesitated for quite some time at the prospect of introducing the Famicom to North America. When Arakawa canvassed toy stores, the hostility he encountered to the very idea of another videogame console was palpable. Atari had damaged or destroyed many a business and many a life on the way down, and few drew much of a distinction between Atari and the videogame market as a whole. According to one executive, “it would be easier to sell Popsicles in the Arctic” than to convince the toy stores to take a flyer on another console.

But Arakawa, working in tandem with two American executive recruits who would become known as “the two Howards” — Howard Lincoln and Howard Phillips — wouldn’t let go of the idea. Responding to focus-group surveys that said the Japanese Famicom was too toy-like and too, well, foreign-looking to succeed in the United States, he got Nintendo’s engineers to redesign the externals to be less bulbous, less garish, and less shiny. He also gave the Famicom a new, less cutesy name: the Nintendo Entertainment System, or NES. The only significant technical update Nintendo made for North America was a new state-of-the-art handshaking system for making sure that every cartridge was a legitimate, licensed Nintendo game; black-market cartridges duplicated by tiny companies who hoped to fly under the radar of Nintendo’s stringent licensing regime had become a real problem on the Famicom. Tellingly, the lockout system was by far the most technically advanced aspect of the NES.
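That handshaking system (the chip at its heart later became known as the 10NES) is generally described as working in lockstep: both the console and every licensed cartridge carried an identical authentication chip, and if the two chips’ outputs ever disagreed, the console held itself in an endless reset. The toy sketch below illustrates that lockstep idea only — the class name, keystream function, and cycle count are all invented for illustration, not the real, proprietary 10NES program:

```python
# Toy illustration of a lockstep lockout handshake. The "CIC" name, the
# keystream function, and the cycle count are invented for illustration;
# the real 10NES program was proprietary and considerably more involved.

class CIC:
    """Stand-in for a lockout chip: a tiny deterministic keystream generator."""

    def __init__(self, key):
        self.state = key

    def next_value(self):
        # A toy 4-bit keystream (the real chips were 4-bit microcontrollers).
        self.state = (5 * self.state + 3) % 16
        return self.state


def console_boots(console_cic, cart_cic, cycles=16):
    """Console and cartridge chips run the same program in lockstep;
    any disagreement keeps the console locked in a reset loop."""
    for _ in range(cycles):
        if console_cic.next_value() != cart_cic.next_value():
            return False  # mismatch: the game never starts
    return True


# A licensed cartridge carries a chip initialized with the matching key...
assert console_boots(CIC(7), CIC(7)) is True
# ...while a bootleg without it fails the handshake immediately.
assert console_boots(CIC(7), CIC(3)) is False
```

The practical effect of a scheme like this is that a bootlegger can’t simply copy a cartridge’s ROM; without a working clone of the authentication chip itself, the console refuses to start the game at all.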

The Nintendo Entertainment System

The new NES made its public debut at last at the Summer Consumer Electronics Show in June of 1985. Few in the home-computer trade press — the videogame trade press didn’t really exist anymore — paid it any real attention. The big news of the show was rather the new Jack Tramiel-led Atari’s 16-bit ST computer. Computer Gaming World was typical, mentioning the NES only as a passing bit of trivia at the end of a long CES feature article: “Nintendo even offered an entirely new game system.” Clearly Arakawa and company had an uphill climb before them.

They deliberately started small. They would sell the NES first in New York City only — chosen because Arakawa considered it the most cynical and challenging place to market a new gadget in the country, and, as the old song says, “if you can make it there you can make it anywhere.” Starting with a warehouse full of the first 100,000 NESs to arrive from Japan and a $50 million war chest, Arakawa and the two Howards personally visited virtually every toy and electronics store in the five boroughs to press the flesh and demonstrate the NES to skeptical managers and proprietors — and (hopefully) to take orders when they were finished. Meanwhile Nintendo blitzed the airwaves with advertising. They managed to sell 50,000 NESs in New York alone that Christmas season — not bad for an unknown gadget in a field that everyone, from the most rarefied pundit to the most ordinary Joe or Jane on the street, considered to be yesterday’s fad.

From that promising start they steadily expanded: first to that other taste-maker capital Los Angeles, then to Chicago, to San Francisco, to Dallas and Houston, and finally nationwide. Sales hit the magic 1 million mark well before the end of 1986. Cheap and cheerful and effortless in its lack of fiddly disk drives and keyboards, the NES was selling by that point as well as the Commodore 64, and far better than any other home computer. In the NES’s second year on the market it eclipsed them all to such an extent as to make continued comparison almost pointless: 3 million NESs were sold during those twelve months alone. And, astonishingly, it was still just getting started. During 1988, 7 million NESs were sold, to go with 33 million cartridges, each of which represented yet more profit for Nintendo. Lifetime NES sales topped 30 million in 1990, by which time one out of every three American homes could boast one of these unassuming gray boxes perched underneath the television. Total NES and Famicom lifetime sales reached a staggering 75 million in 1992; as many Nintendos were by then in the world as all PCs, whether found in homes or businesses or schools, combined. Even the Atari VCS in the heyday of the first videogame fad had never been able to boast of numbers like this.

Because Nintendo had come into the console market when it was universally considered dead, they had been able to reinvent it entirely in their own image. Just as “Atari” had once been a synonym for videogames in general, now “Nintendo” threatened to become the same for a new generation of players. Savvy about branding and marketing in a way that Atari had never quite managed to be, Nintendo felt compelled to actively push against this trend by aggressively protecting and limiting the use of their trademarks; they didn’t want people buying a new “Nintendo” that happened to have the name of Sega, Sony, or 3DO stamped on its case.

Nintendo’s penetration of the North American market could serve (and doubtless has served) as the basis of an MBA course in marketing and brand-building. Starting from the less than nothing of a dead industry replete with consumer ill-will, coming from a foreign nation that was viewed with fear and mistrust by many Americans, Nintendo of America built one of the largest and most insanely loyal customer bases the American economy has ever known. They did it by tying their own brand to brands their target demographic was known to already love, like Pepsi and McDonald’s. They did it by building Nintendo stores within stores in major chains from Macy’s to Toys “R” Us, where kids could browse and play under the benevolent gaze of Mario while their parents shopped. (By 1991, Nintendo alone represented 20 percent of Toys “R” Us’s total revenues, and seven of their ten best-selling single products.) They did it by building a massive mailing list from the warranty cards that their young customers sent in, then using contests and giveaways to make every single one of them feel like a valued member of the new Generation Nintendo. They did it by publishing a glossy magazine, Nintendo Power, full of hints and tips on the latest games and all the latest news on what was coming next from Nintendo (and nothing on what was coming from their competitors). They did it by setting up a hotline of “Nintendo Game Counselors,” hundreds of them working at any one time to answer youngsters’ questions about how to get through this tricky level or kill that monster. They did it by relentlessly data-mining to find out what their customers liked about their games and what they didn’t, and crafting new releases to hit as many players as possible precisely in their sweet spots. They did it by spending up to $5 million on a single 30-second television commercial, four or five times the typical going rate, making the commercials for a new Nintendo game an event in themselves.
They did it by making sure that Mario and Zelda and their other iconic characters were everywhere, from television shows to records, from lunch boxes to bed sheets. And they did it by never worrying their customers with the sorts of metrics that the home-computer makers loved: kilobytes and megabytes and colors and resolutions and clock speeds and bit counts. The NES was so thoroughly locked down that it was years before there was any published information available at all on what was really contained within those ubiquitous gray plastic shells.

If it can all sound a little soulless when laid out like that, well, few in business would argue with the end results. Nintendo seemed to be becoming more American than most Americana. “A boy between 8 and 15 without a Nintendo is like a boy without a baseball glove,” wrote Hobby World magazine in 1988. In 1990 a survey found Mario to be more recognizable to American children than that most American of all cartoon icons — Mickey Mouse.

And where did all of this leave the established American computer-game industry? That was a question that plenty in said industry itself were asking with ever-increasing frustration and even desperation. Total sales of computer games published on all platforms in 1989 totaled about $230 million; total sales for Nintendo cartridges, $1.5 billion. It wasn’t supposed to have gone like this. No one in computer games had seen anything like Nintendo coming. They, the computer-game industry, were supposed to have been the next big wave in American home entertainment — a chicken in every pot and a home computer in every living room. Instead this Japanese upstart had stolen their thunder to such an extent as to render their entire industry an afterthought, a veritable non-entity in the eyes of most financial analysts and venture capitalists. To add insult to injury, they were being smothered by thoroughly obsolete 8-bit technology when they could offer consumers audiovisual feasts played on Amigas and Atari STs and IBM PS/2s with VGA graphics. A computer-game designer with Electronic Arts saw unnerving parallels between his own industry and another American industry that had been devastated by Japan in the previous decade:

The best companies and the best programmers were making computer games. But the Nintendo player didn’t care about the sophisticated leaps we were making on computers — the frame rate of the images or incredible sound. They just wanted fun. It was like we were making gas guzzlers and the Japanese were making subcompacts.

At street level the situation didn’t look much better. Fred D’Ignazio, a columnist for Compute!’s Gazette, shares a typical story:

My kids and I used to play games on our home computer — games like Epyx’s The Legend of Blacksilver, SSI’s Questron II, EA’s Jordan vs. Bird: One-on-One, Gamestar’s Take Down, Arcadia’s Aaargh!, and, of course gobs and gobs of good educational games.

Then the Nintendo landed, and things haven’t been the same since. The Nintendo runs day and night. (We’re not even allowed to shut off the machine when we go to bed because there’s always a game in progress — and there’s no disk drive to back it up.) Meanwhile, I don’t think our little home computer has been fired up in weeks.

The computer that was most damaged by Nintendo’s invasion of North America was undoubtedly the Commodore 64. It was very cheap in computer terms, but once you added in the cost of the essential disk drive it was nowhere near as cheap as the NES. And it was still a computer, even if a computer that had long been used primarily for playing games. You had to type in arcane commands to get a game started, had to wait for the game to load, often had to shuffle disks in and out of the drive and do a lot more waiting as you actually played. A Compute!’s Gazette reader shares the story of her attempt to introduce her Nintendo-loving eight-year-old nephew to the joys of Commodore 64 gaming:

As he looked through my 64 software to pick out a game, I started to give directions on how to handle the software and disk drive. Before I could finish he said, “I just want to use a cartridge and start playing.” After about fifteen minutes into a game he said, “This is great, but how come it takes so long to start the game again and why do I have to keep turning the disk over and over all the time?” Shortly after, he started complaining that his hand was too small for the joystick. He tried three other joysticks, but he either had the same problem or the joystick didn’t have the dexterity needed to play the game. He then said, “I wish I could use my Nintendo controls on your Commodore.” Soon after, he quit and went right to his Nintendo.

The Commodore 64 was in a very difficult position, squeezed from below by Nintendo and squeezed from above by the Amiga and Atari ST and, most of all, by ever more consumer-friendly MS-DOS-based machines from companies like Tandy, which were beginning to sport hard disks, crisp VGA graphics, sound cards, and mice. There wasn’t much that Commodore’s aged little breadbox could offer in response to a feature set like that. In the battle versus Nintendo for the low end, meanwhile, all of the immense force of playground public opinion was arrayed against the Commodore 64. The 64 was clunky and slow and ugly. It was the machine your big brother used to play games on, the one your parents kept pushing you toward to learn programming or to play educational (blech!) games. The Nintendo was the machine that all your friends played on — the same friends who would look on you as a freak if you tried to get them to play a computer game with you.

If you think that hardcore Commodore 64 users accepted this changing world order peacefully, you don’t have much experience with the fanatic platform loyalties of the 1980s. Their heated opinions on the 64’s Nintendo crisis spilled much ink on the pages of the remaining 64-centric magazines, moving through spasms of denial (“If Nintendo has the ability to keep its users captured, why do my two nephews keep pestering me to let them play the games that I have for my 64?”), advice (“Commodore could bring out some new peripherals like a light gun to play shooting games or a keyboard to make use of the superior sound of the 64”), and justification (“This letter was typed on a 64. Let’s see any Nintendo do that!”). When all else failed, there was always good-old-fashioned name-calling: “The word-processing capability of the 64 is a pointless feature to most Ninnies, since the majority of them don’t seem to be able to read and write anyway. Most of the Ninny chic was built on the fact that a baboon could operate it.”

None of this raging against the dying of the light could make any difference. The Commodore 64 went into an undeniable decline in 1988. That decline became a free fall in 1989, and in 1990 the 64 was effectively declared dead by the American software industry, with virtually every publisher terminating support. The other great 8-bit survivor, the Apple II, hung on a little longer thanks to an entrenched user base in schools and small businesses, but when Apple finally discontinued all production of the line in 1993 the news was greeted by most publishers with a shrug: “I didn’t know those old things were still being made!”

The computer-game publishers’ reactions to Nintendo were complicated, ofttimes uncertain, occasionally downright contradictory. With Nintendo rapidly taking over what used to be the low end of the computer-game market, many publishers felt emboldened to refocus their energies on the still slowly growing higher end, particularly on all those new consumer-oriented clones from Tandy and others. Plenty of publishers, it must be said, weren’t really all that sad to see the 64 go. The platform had always been tricky to develop for, and its parent company was still widely loathed for heaps of very good reasons; everyone in the industry seemed to have at least one Commodore horror story to tell. Many had come to see the 64 during its years of dominance as an albatross holding back ambitions that would have been realizable on the bigger, more powerful platforms. Now they were at last free to pursue those grander schemes.

At the same time, though, the Commodore 64 had been their cash cow for years, and there remained the question of whether and how soon all those bigger machines would make up for its loss. Certainly they failed resoundingly to take up the slack in 1989, a bad year for the computer-game industry and a great one for Nintendo.

As unhappy as the majority of industry old-timers remained with the Nintendo-dominated state of affairs in digital games in general, that $1.5 billion in annual cartridge revenue and massive mainstream penetration was awfully tempting. As early as 1988, it seemed that just about everyone was discussing adapting their computer games to the NES, and a fair number were swallowing their pride to approach Nintendo with hat in hand, asking for a coveted license to make NES games. In addition to the sheer size of the Nintendo market, it also had the advantage that piracy, which many in the computer-game industry continued to believe was costing them at least half of the revenues they would otherwise be enjoying, was nonexistent there thanks to those uncopyable cartridges and the NES’s elaborate lockout system.

Activision,[1] who had enjoyed their greatest success by far in the old glory days of the Atari VCS, jumped onto the Nintendo bandwagon with perhaps the most enthusiasm of all. Activision’s head, the supremely unsentimental Bruce Davis, often sounded as if he would be perfectly happy to abandon computers altogether, to make Activision exclusively a publisher of videogame cartridges again: “If hardware companies are designing a machine for one purpose, they will do a better job than on a multi-function machine.”

But it’s the more unlikely NES converts that provide the best evidence of just how far Nintendo had come and just how much pressure the traditional computer-game industry was feeling. The NES began to get quite a number of ports of computer-game fare that no one would ever have imagined trying to put on a machine like this just a year or two earlier. Origin, for instance, put out NES versions of Ultima III and Ultima IV, and Lucasfilm Games ported Maniac Mansion. (See Douglas Crockford’s “The Expurgation of Maniac Mansion” for a description of the hoops publishers like Lucasfilm had to jump through to meet Nintendo’s stringent content restrictions.) Even SSI, whose traditional stock-in-trade of turn-based, cerebral, complicated strategy games was about as far from the whimsy of Mario and Zelda as you could get, moved Pool of Radiance over to the NES. Computer Gaming World, the journal of choice for those same cerebral strategy gamers, tried to rope in Mario fans with a new magazine-within-a-magazine they dubbed “Video Gaming World.”

Few of these initiatives bore all that much fruit. The publishers may have found a way to get their games onto the NES, but said games remained far from the sort of fare most Nintendo players were interested in; suffice to say that Nintendo never had to worry about any of these titles eclipsing Mario. Still, the fact that so many computer-game publishers were making such an effort shows how scary and uncertain Nintendo was making their world. Perhaps the most telling moment of all came when Trip Hawkins announced that Electronic Arts would be jumping into the console space as well. This was the same Trip Hawkins who had written a commitment to “stay with floppy-disk-based computers only” into Electronic Arts’s first business plan, who had preached the gospel of home computers as successors to videogame consoles as loudly and proudly as anyone in his industry. Now he and his company were singing a very different tune. Bing Gordon, Hawkins’s right-hand man at Electronic Arts, compared home computers to, of all unflattering things, steam engines. James Watt, the man who perfected the steam engine, had imagined one in every home, with a bunch of assorted pulleys and gears to make it do different things. Instead modern homes had a bunch of more specialized machines: washing machines, food processors… and now Nintendos. Soon Hawkins would leave Electronic Arts to found 3DO, a company to make… you guessed it, a new videogame console.

Some, however, chose a more belligerent path than these can’t-beat-’em joiners. Nintendo’s rigorous control of the NES’s walled garden rankled everyone in the older software industry; this just wasn’t how their business was done. They believed that Nintendo was guilty of restraint of trade, antitrust violations, you name it. Particularly enraging was Nintendo’s complete control of the manufacturing pipeline for NES cartridges. Leveraging those data-mining systems of theirs, more sophisticated than anyone had heretofore dreamed of, Nintendo made sure that the supply of new games was always slightly less than the demand for them, thereby building hype for each new title as a hot, desirable status symbol among the Nintendo Generation and, most of all, avoiding the glut of games piled up in warehouses — and, eventually, landfills — that had marked the Great Videogame Crash of 1983. But when American publishers saw their games produced in insufficient quantities to become the hits they believed they might otherwise have been, they cried foul. The Software Publishers Association served as the disgruntled voice of the American software industry as a whole in what became a full-scale public-relations war against Nintendo.

The SPA believes that Nintendo has, through its complete control and single-sourcing of cartridge manufacturing, engineered a shortage of Nintendo-compatible cartridges. Retailers, consumers, and independent software vendors have become frustrated by the unavailability of many titles during the holiday season, and believe that these shortages could be prevented by permitting software vendors to produce their own cartridges.

American publishers felt certain that Nintendo was playing favorites, privileging their own games and those of preferred third-party publishers — generally the ones from Japan — by manipulating production numbers and by shaping the sentiments of Generation Nintendo through the coverage they gave (or didn’t give) each game in Nintendo Power. “If I pissed Nintendo off,” runs a typical complaint, “I would get less product. My games would get hit in Nintendo Power and they’d get low ratings.” And the most surefire way to piss Nintendo off, at least according to this complainer, was to release a game for the NES’s first serious competitor, the Sega Genesis console that entered the United States in 1989.

There was plenty of tinder already lying about the public sphere, just waiting to be ignited by such rhetoric. All of the concerns about videogames that had been voiced by parents, educators, and politicians during the heyday of Generation Atari were now being dusted off and applied to Generation Nintendo. Now, however, they were given additional force by Nintendo’s very foreignness. Plenty of Americans, many of whom had still not completely forgiven Japan for Pearl Harbor, saw a nefarious agenda behind it all, a fifth column of Mario-obsessed youngsters who might come to undermine the very nation. “Notice the way Super Mario is drawn,” wrote one in a letter to a magazine. “He has the eyes of someone who has been brainwashed.” Lurking just below the surface of such complaints, unstated but by no means unconveyed, were old attitudes toward the Japanese as shifty characters who could never be trusted to follow the rules, whether in war or business. It all came down to “cultural” differences, they muttered disingenuously: “There’s more of a sharing of the pie by American companies. In Japan, it’s different: winners win big and losers lose.”

Hoping to capitalize on the burgeoning anti-Nintendo sentiment, in December of 1988 Tengen Games, a spinoff of Atari Games (which was itself the successor to the standup-arcade portion of the original Atari’s business), sued Nintendo in federal court for antitrust violations and monopolistic practices: “The sole purpose of the lockout system is to lock out competition.” Having found a way to defeat the much-vaunted lockout system through a combination of industrial espionage, reverse engineering, and good old social engineering — this is one of the few occasions in Nintendo’s history where one might accuse them of having been naive — Tengen simultaneously launched a few of their own unauthorized games for the NES.

Nintendo’s counterattack against Tengen was massive and comprehensive. Not only did they launch the expected blizzard of legal actions, but they made it clear to all of the stores that handled their products that there would be grave consequences if they chose to sell the Tengen games as well. Such threats ironically represented a far more clear-cut antitrust violation than anything found in Tengen’s original suit. When Tengen got the court to order Nintendo to cease and desist from such behavior, Nintendo allegedly only became more subtle. “You know, we really like to support those who support Nintendo, and we’re not real happy that you’re carrying a Tengen product,” a rep might say. “By the way, why don’t we sit down and talk about product allocations for next quarter? How many Super Marios did you say you wanted?” “Since it was illegal, there were always excuses,” remembers one retailer. “The truck got lost, or the ship from Japan never arrived.”

Tengen was determined to try their case against Nintendo first and foremost in the court of American public opinion. “Who gave Nintendo the power to decide what software the American public can buy?” they asked. The New York Times, for one, agreed with them: “A verdict in favor of Nintendo would probably have a spillover effect into the personal-computer industry, where it could have a chilling effect on the free flow of ideas and innovations that have characterized that market since its inception.” An opportunistic Congressman named Dennis Eckart launched a high-profile crusade against Nintendo that led to lots of heated rhetoric amid Congressional hearings and the involvement of several state Attorneys General and the Federal Trade Commission. Jack Tramiel of the other Atari (the one currently making the Atari ST computer), who had always viewed lawsuits as healthy business competition by other means, piled on with a suit of his own, claiming that by monopolizing the market Nintendo was keeping his own company from getting good software for its machines. “Nintendo has demonstrated its disregard for free and fair competition in America,” said Jack’s son and anointed successor Sam Tramiel.

Yet the anti-Nintendo sentiment in the country didn’t ultimately do much to help either of the two Ataris’ legal cases; the courts proved willing to buck that rising tide. In a landmark ruling against Tengen in March of 1991, Judge Fern Smith stated that Nintendo had the right to “exclude others” from the NES if they so chose, thus providing the legal soil on which many more walled gardens would be tilled in the years to come. Similarly, the Tramiels’ suit against Nintendo was definitively rejected in 1992, after having cost their company a great deal of time, energy, and most of all money it could ill afford. The other various and multifarious investigations into Nintendo’s business, of which there were far too many to summarize here, resulted in a mixed bag of vindications and modest slaps on the wrist that did nothing to alter Nintendo’s overall trajectory. Perhaps the best argument against Nintendo as a monopoly was the arrival of the company’s first competitors in the console space, beginning with Sega, who proved that it actually was still possible to carve out a non-Nintendo place of one’s own in the game-console industry that Nintendo had so recently resurrected.

Nintendo, then, was here to stay, as were Sega and other competitors still to come. The computer-game industry would just have to accept that and reckon with it as best they could. In the end, the threat from Japan proved not quite as apocalyptic as it had seemed during the darkest days of 1989. In 1990 computers could start to boast of a modest new buzz of their own, thanks to the new so-called “multimedia PCs” and a bunch of new games that took advantage of their capabilities. Having ceded the low ground to the consoles, computers had retained the high ground, a loyal constituency of slightly older, more affluent gamers who still had plenty of room in their hearts for the sort of big, high-concept strategy, adventure, and CRPG games that weren’t all that realizable on the more limited consoles. The computer-game industry returned to growth in 1990, and by a double-digit percentage at that. The vibrant jungle of PC gaming would continue to bloom in a thousand ways at once, some of them productive, some of them dead ends, some of them inspiring, some of them kind of repugnant. And through it all, that jungle would remain interesting in ways that, at least for this humble writer, the fussily manicured walled garden of Nintendo has never quite managed to be. But whichever mode of distribution you personally favored, one thing became clear as the 1980s gave way to the 1990s: neither Generation Nintendo nor the emerging Generation Wintel would be going anywhere anytime soon.

(Sources: The Making of the Atomic Bomb by Richard Rhodes; Game Over by David Sheff; Compute!’s Gazette of May 1988, March 1989, August 1989, September 1989, October 1989; Computer Gaming World of September/October 1985 and June 1988; Amazing Computing of January 1989; materials in the SSI and Brøderbund collections at the Strong Museum of Play.)

Activision changed their name to Mediagenic midstream in these events. Because I haven’t told the story behind that change yet, and in order to just generally avoid confusion, I simply refer to the company as “Activision” in this article. ↩

77 Responses to Generation Nintendo

My friends and I started getting Nintendos in about 1986/7 and they sat alongside our C64s for several years. Maybe my group of friends was unusual, but we didn’t “replace” the C64 as much as augment it. As I’m sure you remember, piracy was rampant for the C64 and non-existent for the NES. So, since we had very limited resources, we would ask for a Nintendo cartridge for a present because there were other ways to acquire C64 games.

In other words, I think the emergence of the NES actually incentivized piracy on the C64 for at least the people who owned both.

And thank you, from the bottom of my heart, for not making an April Fools joke.

One error (I think…tho my memory is fuzzy): It was Ultima III that was ported to the NES as simply “Ultima,” not IV.

I remember that article from Compute!’s Gazette and the ensuing letters columns. The “I just want to play” sentiment cannot be overstated. I remember showing a friend (who never owned a computer) some games on my C-128 in the late ’80s and when I had to type LOAD"*",8,1 to boot up a program, he immediately said, “that’s stupid. Why do you have to learn and memorize all these codes just to play a game?”

I was flabbergasted at the time. “There’s not much to memorize, it’s just how you talk to the computer.” But the thing is, I grew up with a computer in the house since I was six; my oldest brother had gotten interested in home computing and our VIC-20 (later C-64, still later the 128) was used by everyone in the family for games and such. It was natural for me to input a LOAD command and have to specify the file name, device number and such…to someone who wasn’t raised with that, however, I can see how foreign it must have seemed (much like the worlds of hashtags and cryptocurrencies seems to me these days).

I often felt like the only kid in the world without an NES, but I loved my Commodore. The kinds of games I like – RPGs, strategy and war games, adventure games – just weren’t available on the cartridge systems. Nor were graphics and music programs, productivity software, etc. But boy did the NES do arcade games right…I remember playing Ikari Warriors at a friend’s house every weekend well into the night.

Just some unfocused rambling (Again). Thanks for the article.

And yes, thanks from me, as well, for not making this an April Fool post.

It seems there was an NES Ultima III, which I hadn’t realized. But there was also an Ultima IV. Some people will tell you that’s the one to play, since for obvious reasons it excises the text parser in conversations and with it the whole “guess the keyword” problem. Never played it myself.

The Ultima IV NES port was the first game in the series I ever experienced and the source of my oldest video game related memory.

As a younger brother, I spent a lot of time watching my older brother play games and annoying him with tips and comments. I figured out the visual cue for secret passageways and kept on telling my brother “walk through that wall! there’s something there!” As one might expect if one ever had an older brother, the response was along the lines of “pfft, shut up dummy.” Imagine my unseemly pride when brother finally decides to shut me up and walks — right — through — the wall.

The Ultima III and IV ports are fairly similar. They both look and feel a lot like Dragon Warrior. In addition to the dialogue being simplified, reagent mixing was removed (you still needed to have them), the party size was smaller, and food was removed.

You might find it particularly notable that the endgame was changed to remove all the quiz-game stuff you mentioned as an anticlimax.

Unfortunately, later console ports of Ultima games have much less to recommend them. V had a full VI-style isometric graphical overhaul, but the result was ugly and painfully slow. VI (on SNES) was hard to control and had a lot of its notable interactivity removed, and VII was turned into a bad Zelda knockoff. I played several of these as a kid and hated them even before I knew what I was missing.

The Theme from New York, New York (“If I can make it there, I’m gonna make it anywhere”) is not that old a song; it dates from the 70s. (I think we’re roughly the same age, and I remember being surprised at how recent it is.)

Ah, so that’s why Infocom released games like Shogun for the Apple II, and not for the C64. :)

In 1990 computers could start to boast of a modest new buzz of their own, thanks to the new so-called “multimedia PCs” and a bunch of new games that took advantage of their capabilities.

Admittedly I wasn’t in the US but in Portugal, where consoles never really dominated the market until the first PlayStation (the 80s here were ruled by the Spectrum, and later by the Amiga). Still, I think the PC’s resurgence may have started a little sooner (that is, before CD-ROMs — unless you mean “multimedia” as simply graphics and sound), with VGA-compatible cards and the Sound Blaster. For the first time, the PC was the most “powerful” system in terms of graphics and sound, after being humiliated for so many years by the C64, the ST, the Amiga, and, yes, the various consoles. Suddenly, it wasn’t the PC owners wishing they had an Amiga, it was the other way around.

I don’t know if the Commodore 64’s commercial decline was obvious enough yet for it to have been a major factor in Infocom’s decision not to support it with their final games. More important I think was that the Apple II was more capable as a platform for interactive fiction. It could be and generally was equipped with 128 K or more of memory by 1988, and supported 80-column text that could be mixed easily with bitmap graphics.

“Multimedia” became a big buzzword in 1990, jumped on by Microsoft, Apple, and even Commodore (who, having released the first multimedia PC back in 1985 but having been too clueless to realize it at the time, were a day late and a dollar short in their marketing as usual). While it was generally understood that CD-ROM was *coming*, it would be a couple of years yet before CD-ROM games and other applications became commonplace. In the meantime, it did indeed often refer to just very good (for the era) graphics and sound. A consortium working with the Software Publishers Association defined an official “Multimedia PC” standard in 1991: 80386 processor, 2 MB of RAM, 30 MB hard disk, VGA graphics, Sound Blaster, and a CD-ROM drive.

This was an interesting piece, which somehow filled in a part of the picture I’d never thought about too much before: when it came to the fading of the “multi-systems era,” I had always sort of just assumed it was a matter of the relentless march of MS-DOS…

Anyway, while I’ve admitted before to having got through the 1980s somehow unaware there could be computers in homes that didn’t come from Radio Shack, even I couldn’t miss Nintendo Entertainment Systems starting to show up in other homes I visited. My family never had one, though; maybe there was just the understanding we had a perfectly good Color Computer with lots of discs with games on them (and with the “Coco 3,” some of them even started using more than the same four colours all the time). There was a bit of angst in the Color Computer magazine “The Rainbow” that seemed related to the NES, and the last games Radio Shack sold for the computer were all in “Program Pak” cartridges.

Your casual comment a few posts back about the Commodore 64 and Apple II markets both going soft in the same year did catch my attention when you made it: I’ve imbibed enough Apple II history to be aware of the bad feelings that developed at the end of the decade, when plenty of users were starting to write letters about how, even after the expulsion of Steve Jobs, Apple’s management had turned their backs on a valuable asset, as if in a fit of pique against (relatively) low-cost systems/the 6502-System Monitor-expansion slot gestalt/”Woz,” but I had supposed things had to have been different somehow with Commodore, for all that I have read the Compute’s Gazette issues you mentioned.

Speaking of Radio Shack, I remember being in a store just before one Christmas during the NES years. In thirty minutes I saw three different people come in asking whether Radio Shack sold Nintendo cartridges. I wonder who was more frustrated, the desperate parents seeking gifts for their children, or employees unable to sell the most-coveted toys on the market.

One problem with many fan historians is that they tend to mistake specific causes for broader historical trends, resulting in the “if only” syndrome. If only Infocom hadn’t squandered all their resources on Cornerstone, the commercial text adventure would still be alive and well. If only Apple had properly supported, promoted, and updated the Apple II line, it would still be going strong. If only Gary Gygax had had sole control of TSR, tabletop RPGs would be as popular now as they were during the early 1980s. Etc. It’s natural for us as humans to be drawn to tragic narratives like this, but they seldom stand up to much scrutiny.

I could say that dismissing single causes is equally wrong. If Infocom hadn’t squandered all their resources on Cornerstone, commercial IF might have managed to morph into something different and better instead of dying with a whimper, and influence computer games for the better much more, much sooner. (Then again, the 1990s IF scene might not have been born in that case, and we’d be losing out now. History’s complicated.) And sure, likely the Amiga or the Apple ][ wouldn’t have survived even with proper, timely updates and support to the line. But had they been more influential for longer, they might have changed the entire computer industry for the better. (The utter domination of Wintel during all these decades was an unmitigated disaster, for too many reasons to list here: the promotion of mediocrity, making it possible for a single piece of malware to infect potentially 90% of computers…) Trends matter, sure… but trends are made of individual events like that, pushing each other forward like dominoes. Add or remove one, the whole pattern changes, even though none of its parts go anywhere. And the pattern toppling dominoes make is the whole point.

Noooo I don’t want to say goodbye to the 1980s yet!! Although I grew up in the 1980s with my C64, I don’t really have a true feeling of nostalgia for the games I played. But I get a nice dose of “borrowed nostalgia” (and history lesson) by reading the stories of all the games and game companies I didn’t experience back then. Say we’re not out of the 80s quite yet!

While loss-leaders eventually became a standard console model, it was not until much later, when Sony and Sega went to war with their PlayStation and Saturn consoles. Most earlier consoles (especially the ones from Nintendo, who continue to avoid the practice whenever possible to this day) were sold at a profit.

The Nintendo Seal of Quality never had anything to do with the quality of the gameplay, only that the content passed Nintendo’s increasingly strict censorship and that the cartridge would actually load. (That censorship not only kept games like Custer’s Revenge off the platform but also kept out many of the creepy games being made for Japanese PCs, already an extreme niche market, and was intended to avoid a reaction like the Satanic Panic.) The guarantee that a cartridge would load was important because in the latter days of the Atari it was far from rare to get one that didn’t even work. The NES had plenty of really bad games.

I’d challenge the notion that the Ultima ports sold poorly because of a lack of resonance with the market: they are the textbook example of a badly handled port, with an entire keyboard’s worth of commands shoved into an extremely cumbersome series of nested menus. In fairly short order the nascent jRPG genre would arrive on American shores to massive success (minus games like the Megami Tensei series and Sweet Home, which would never have passed the NOA censorship test, and the second two Final Fantasy games, which would take so long to translate that they’d have been released after the SNES was already out, when nobody would pay the extremely high price for an RPG cartridge), and those games were easily better than the majority of what was being put on computers at the time. While we remember the Ultimas and the Gold Boxes and other highlights, anyone who follows the CRPG Addict will be aware that these were islands rising high above a sea of mediocrity.

Finally, how the NES was packaged is worth noting. The grey-box redesign was deliberately intended to make the console look like a VCR, and the original release package included R.O.B. the robot as a pack-in accessory. The goal, of course, was to avoid associations with the VCS and its competitors, which were top-loading (a design all other consoles, including the second-generation NES, would adopt, and for good reason: the NES’s front-loading slot was a massive pain back in the day, and retro shops are constantly being sold “broken” consoles that can be fixed simply by opening the case and conducting a minor realignment) and which came with no accessories other than a controller.

Not that I don’t appreciate your insights — I do, very much so — but most of this is just a little farther down in the weeds than I preferred to get in the article, which is very much a “Nintendo through the eyes of the computer-game industry” piece rather than an exhaustive history.

On the subject of loss leaders, I did qualify that assertion with a “little more.” ;) By all accounts Nintendo’s profit margins were not very high on hardware, particularly early on. (As production costs inevitably decreased over time, this may have changed somewhat.)

While I’m sure that the NES had its share of underwhelming games, I think Nintendo’s quality control really does have to be understood in the context of the old Atari VCS market and the contemporaneous computer-game market. For instance, plenty of adventure-type games were being released on PCs that were so poorly designed as to be effectively insoluble. An interesting case study is the port of Faery Tale Adventure, one of these badly designed adventures, to the Sega Genesis; Sega very much followed Nintendo’s lead in curating their own walled garden. In the case of The Faery Tale Adventure, Sega insisted that a basic walkthrough be printed in the manual so players could have a fair chance. This is the level of basic quality control that players of console games could be assured of.

I am a little surprised that you mention the Ultima games in particular as poor ports. I’ve heard quite a bit of love expressed toward the NES Ultima IV in particular, although I don’t have any direct experience with it.

Ultima IV gets a fair bit of praise because of the simplified conversation method, but the games are extremely cumbersome to play compared to native games that already existed when these ports came out (although those mostly hadn’t been brought over here yet).

Loading up U3, you’ll find it takes far longer to create a party (not helped by the fact that every class and race is reduced to a three-character abbreviation to save space), your party moves so slowly you risk falling asleep just walking from one side of a town to the other, and having to use a menu instead of just hitting a button that will react appropriately to whatever you’re facing (the way pretty much any other game would work) is incredibly annoying. If I’d bought it at the same time I was playing the C64 version, I’d have considered myself robbed.

For an interesting specimen of the xenophobia in the American marketplace at this time, I present my transcriptions of “the Atarian” comic strips, from a house organ of the once-mighty Atari… struggling against the WWII-era caricatured “Ninja-Endo”: http://pixelpompeii.blogspot.ca/2015/07/atari-super-hero.html

Jimmy, great article, as ever! You write “Built around a licensed Japanese version of the venerable old 8-bit MOS 6502” and yet we know from visual6502 and visual 2A03 that the chip in the NES contains a layout-level copy of a 6502 with 5 transistors carefully excised. (See _On the Edge_ and also the 6502DecimalMode wiki page.) Would a licensed copy remove the exact circuits which are covered by patent? I believe not!

This situation is… really interesting. Now that I think about it, if the chip had been properly licensed Commodore would likely have easily earned enough from the NES to sustain them for years. No other device that used the 6502 came up to even half of the NES’s production numbers. It certainly doesn’t cast Nintendo in a very favorable light.

I think the key is “assuming they were aware of it.” The NES really was a black box for many years. Nintendo worked very hard to keep it that way — binding developers to unprecedented layers of NDAs, etc.

I suspect that if Jack Tramiel had still been in charge Commodore would have been *very* aggressive about investigating and then attacking, given Tramiel’s well-known fear and loathing of the Japanese and his fondness for lawsuits. The often nearly rudderless, confused, schizophrenic version of Commodore that existed after him though? Who knows what they were thinking. They may just have squandered a life preserver through sheer inattention.

Some interesting speculation here, including the idea that Nintendo’s famous secrecy about the details of the NES may have been partly implemented in the hope of covering up the details of its processor: http://metopal.com/2012/02/12/famicom-brain/.

The NES used a Ricoh 2A03 microprocessor, which was built around a 6502 core that had functions unnecessary for a game console removed and other functions, such as a dedicated circuit for the controllers, added. Ricoh had a valid second source license for the 6502, and there is no evidence that there was anything even slightly shady about the matter. Nintendo would later purchase the Ricoh 5A22 chip (a variant of the 65816, a direct descendant of the 6502 used in the Apple ][gs) for the Super Famicom/SNES long after the company had lost most of their NES era anti-competitive lawsuits. By this era, they wouldn’t have risked anything connected to something even a little dodgy.

At least a dozen major computers used the 6502 or a direct derivative, as did the Tamagotchi toys that were everywhere in the ’90s and the 30,000,000 Atari 2600 consoles, among other uses.

The 6502 was so ubiquitous in that era that even the 61,000,000+ NES consoles were little more than a drop in the bucket. Even discounting the continuing use and manufacture of the chip (it is still used today in a lot of embedded systems), 6502 sales in the era probably topped half a billion.

Jimmy Maher

April 3, 2016 at 6:54 am

Do you happen to have a source you could point me to for this “second-source” license? David Sheff’s book, which remains the go-to history for the NES, is no help; it just says the chip is a 6502 and leaves it at that. The idea that it was licensed goes against the statements in Bagnall’s book. It could of course very well be that Bagnall’s book is incorrect; one of its problems is that it tells the story of Commodore almost entirely from the perspective of the engineers, who have an incomplete grasp at best of the bigger business picture.

One thing I have trouble understanding is the mechanics of 6502 licensing in general. Given that the 6502 was so ubiquitous, it seems that should have been a huge revenue stream for Commodore well into the 1990s, yet I never hear it discussed in that context.

anonymous

April 3, 2016 at 8:15 am

Jimmy,
Two other excellent sources for some of the obscure aspects of NES history are the recently-published book I Am Error, by Nathan Altice, another of the excellent Platform Studies books from MIT Press, and also Glitter Berri’s Game Translations, here: http://www.glitterberri.com/developer-interviews/how-the-famicom-was-born/ (said translations were actually commissioned by Altice).

Jimmy Maher

April 3, 2016 at 8:43 am

Thank you! I Am Error provides by far the best overview I’ve yet seen. (I’m somewhat embarrassed to admit that, having once written a Platform Studies book myself, I had no idea this one existed.) Nathan Altice does say that Ricoh licensed the rights for the 6502 as a “second-source” manufacturer, but also accepts the assertion in Bagnall’s book that Nintendo excised parts of the 6502 to avoid the MOS patents. He concludes that “Commodore’s suspicions of hardware malfeasance were justified.”

So, I’m still a little confused about the relationship between licensing and the patents. Would licensing the chip not grant the right to, you know, build the chip, complete with all of its patented components? And wouldn’t a licensee have to pay a royalty for every chip manufactured, and wouldn’t that add up to a hell of a lot of money for Commodore?

My question is: if the NES used a 6502 derivative, could that fact have remained hidden for so many years? I mean, I’d assume that hardware manufacturers would buy a console or two from the competition, and have their engineers analyze it, even if they didn’t suspect any wrongdoing. Wouldn’t that be a standard practice? (I know it’s a fictional account, but in “Micro Men”, we see BBC engineers opening up Spectrums and vice-versa, IIRC.)

Also, I know zero about NES programming, but didn’t programmers (especially Western ones, many of them already experienced in coding for 6502 processors) realize they were programming in some variant of 6502 assembly? Or did the NES use some API that isolated the processor completely?

anonymous

April 3, 2016 at 6:34 pm

Jimmy,
It looks as though what happened was that MOS had patented one particular part of the 6502, but that the chip generally was not protected. This isn’t surprising; in the US, at least, copyright isn’t available for computer chips per se under what’s known as the utility doctrine (a thorough explanation would get us way off track). Instead there is a sui generis right called a mask right (for mask works), but the law didn’t provide those until 1984. (The “mask” in question relates to the way that chips are made; it has nothing to do with, like, Halloween masks.)

So Ricoh licensed the 6502 properly (among other Ricoh 6502s is the RP2A10 used in Atari consoles), which got them copies of the masks needed to make them, and trade secrets on how to use them. Then, to produce the 2A03 chip, they removed a few carefully chosen transistors so as to disable the one patented part of the 6502. While the rest of that part might have still been on the chip, it would be non-functional, and probably would require physical disassembly and inspection under an electron microscope to find. In fact, I recall that Bagnall says that MOS ultimately did just that.
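As an aside, the patented part in question is widely reported to have been the 6502’s decimal (BCD) arithmetic mode: on a stock 6502, setting the D flag changes how ADC/SBC adjust their results, while on the 2A03 the flag can still be set but is simply ignored. Here’s a rough Python sketch of the difference (the function names and the simplified flag handling are mine, for illustration only; a real 6502 also updates the N/V/Z flags and handles invalid BCD inputs in quirkier ways):

```python
# Illustrative sketch (not a real emulator API): simplified 8-bit
# add-with-carry, modeling only the accumulator result and carry-out.

def adc_6502(a, operand, carry, decimal_mode):
    """ADC as on a stock 6502: honors the decimal (D) flag."""
    if not decimal_mode:
        result = a + operand + carry
        return result & 0xFF, result > 0xFF
    # BCD mode: adjust each nibble so it stays within the 0-9 range
    lo = (a & 0x0F) + (operand & 0x0F) + carry
    hi = (a >> 4) + (operand >> 4)
    if lo > 9:
        lo += 6
        hi += 1
    carry_out = hi > 9
    if carry_out:
        hi += 6
    return ((hi << 4) | (lo & 0x0F)) & 0xFF, carry_out

def adc_2a03(a, operand, carry, decimal_mode):
    """ADC as on the NES's 2A03: the D flag exists but is ignored."""
    return adc_6502(a, operand, carry, decimal_mode=False)

# BCD 19 + 28 = 47 on a stock 6502 with D set...
assert adc_6502(0x19, 0x28, 0, True) == (0x47, False)
# ...but plain binary 0x19 + 0x28 = 0x41 on the 2A03, D flag or no.
assert adc_2a03(0x19, 0x28, 0, True) == (0x41, False)
```

The point of the demonstration: software that relies on decimal mode gives different answers on the two chips, which is exactly the sort of behavioral difference that would require deliberate probing (or decapping the chip) to notice.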

So if they didn’t infringe the patent, they could avoid the patent license, and presumably the license for the masks and other information they got from MOS wasn’t breached by making the 2A03. Given that MOS was cloning the Motorola 6800, you’d think that they’d anticipate getting ‘ripped off’ themselves, but I guess it slipped by them or they could have written a better licensing agreement.

Meanwhile, another obscure bit of Nintendo lore having to do with the Famicom design is that originally there wasn’t supposed to be a Famicom. The close partnership between Nintendo and Coleco — we all know the famous port of Donkey Kong — was such that Nintendo tried to license the right to sell ColecoVisions in Japan. According to The Golden Age of Video Games, by Dillon, Nintendo wanted to buy ColecoVisions for 10% over cost, while Coleco wanted to sell for 10% under retail. So it fell through, and Nintendo designed the Famicom instead. (And then flirted with licensing the Famicom to Atari, which also didn’t happen, but that story is better known now.)

I think viewing “advancement” in gaming as a linear trend really misses the point a bit. The reason that it’s hard to go back to old computer games is that they lack a playability which came with game-specific hardware. The reason id Software cracking the scrolling barrier on DOS was so huge was that you could finally make a game which felt solid to play, and that’s why we refer to Nintendo games as “timeless.” They mandated a quality of play because of their lack of technology.

I think also in the history of ludic narrative there’s a lot to be said for games like Zelda 2 in how they constructed a world based on at least some degree of a fantasy canon. It’s only “infantile” if thematics are all that matter. Nintendo certainly made mistakes, and I understand that you’re not someone who enjoys those types of games, and that’s fine. I try to take things as a whole and understand how the various platforms relate to each other in positive ways, rather than wondering about the “what ifs” had another platform been dominant. It’s what the market wanted for a reason.

I’ve had a few responses to this piece, in public and in private, that have taken it as almost a rant, yet another rehashing of the supremely tedious console-vs-PC wars that have been going on for time immemorial. One even called me “a disgruntled computer gamer.” My instinct is to think I must have done something wrong as a writer, as that was never my intention. But if so I honestly don’t see where I went wrong.

I don’t, for instance, see where I characterized advancement in gaming as a linear trend. On the contrary, I thought I took pains to emphasize the good of Nintendo’s walled garden — a measure of professionalism and quality control that computer gaming could definitely have used — along with the bad. Similarly, I don’t see where I’ve spent any time or energy at all on “what ifs” had another platform been dominant.

For the record, I’m quite thoroughly gruntled by the rise of Nintendo; those Commodore 64 stalwarts I quote are definitely not speaking for me. I think the NES performed a useful if painful service to computer gaming in blowing out the aging low end to make room for more ambitious stuff, the kind that personally interests me most. And I think Nintendo’s focus on basic quality control led everyone to raise their standards a bit and think about their poor, suffering players. You’re right that I have little personal interest in the likes of Mario or Zelda, but there’s room for everyone.

I’ve had a few responses to this piece, in public and in private, that have taken it as almost a rant, yet another rehashing of the supremely tedious console-vs-PC wars that have been going on for time immemorial. One even called me “a disgruntled computer gamer.”

Hmm. For what it’s worth, I think your piece is extremely fair and accurate towards Nintendo. It’s easy to paint a lot of what they did in the 80s and 90s in unflattering terms: they were soooo litigious and paranoid. I think you’ve been very fair to lots of computer game developers as well and you certainly haven’t come off as judgmental.

I also think this post is perfectly fair, and certainly not anti-Nintendo or anti-console at all. It’s mostly just history. Could it be that some people misread those magazine quotes as being from you?

Though, now that I think about it — and I missed it the first time, admittedly! — the part where Jimmy insinuates that Kirby was the Zodiac Killer might be a sticking point among certain console enthusiasts.

Apologies if I made it seem like I was trying to take a side like that. I wholly appreciate your perspective as well as those old computer games, even with the difficulties that come with returning to them so far down the line.

Again though, it’s a “what if” matter thinking about how much use you could have gotten out of a C64 compared to a Nintendo. Even with the many years of hindsight and hacks down the line, I don’t see there really being that much of a disruption had it remained the primary platform in the United States. In most of Europe it -did- remain the primary platform, and while the Amiga scene demonstrates a drift towards playability that would eventually become standard, I really can’t see it having advanced narrative qualities in the way that I feel like you’re proposing.

At the end of the day I think it’s the nature of how we look at games as individuals. My view is that the ludic narrative was advanced at the point when the technology allowed the player to explore depth at their own discretion, in a way that went beyond reading between the lines of text. I think Nintendo games achieved that in time, and it has been positive on the whole for the medium, rather than limiting.

It is true the NES Trojan-horsed its way into homes using a system designed to look like a VCR; many people I knew in the mid-to-late 80s had them in their living rooms on top of or next to their VCR. Then, they called it an Entertainment System instead of a video game. Arcades were still doing quite well, and the computer market was exploding; most of the friends I had who had an Atari were switching to other devices, with Nintendo being the main one to pick up where the VCS left off. Nintendo did more than switch many of those gamers back; they expanded the market. Suddenly, playing Nintendo was a thing. You did it after school, on weekends, with family, friends, etc. You didn’t do that as much on Atari, which was primarily a solo young-male system, maybe two players at most. The NES had accessories meant for playing with more than two people, supporting up to four, about the average family size, so an entire family could play together.

By expanding the market from its peak in 1983 (funny how it’s considered to have crashed the same year it peaked), they not only became a household name, they made it a Japanese-controlled market. America may have won WWII, but Japan has won the economic war ever since!

As a teenager in the UK in the 80s, I recall the NES never having the same impact as it did, say, stateside or in Japan. As the microcomputer was still at the heart of the games industry, with the Amiga and Atari ST looking to take the baton, the NES and its counterpart the Master System never did get the kids dumping their Spectrums and Commodores just yet.

It was when the SNES and Megadrive (Genesis) came along that the market started to shift from personal computers towards consoles on the scale we know today.

I have an NES and still enjoy playing it from time to time, and consider the machine one of the best 8-bits ever made!

I’m glad someone else jumped in first with a view from the UK since I sometimes feel I might be misrepresenting the situation when I share my own limited experiences of the 80s. :-)

Being heavily focused on microcomputers, I don’t remember being aware of the NES during the 80s. I started to become aware of the rise of consoles in the early 90s when a flatmate at university acquired a second hand Sega Master System. Sega had plenty of memorable TV adverts (https://www.youtube.com/watch?v=GAOyMJHgNVw) but I don’t remember any from Nintendo.

This is why I find it strange to read mainstream histories of gaming that go from VCS to C64 to Nintendo to MS-DOS and the fourth/fifth generation consoles. It’s interesting to see things from that perspective but it’s completely alien to me.

All throughout Europe the NES didn’t really do well. Nintendo simply had no presence here. They licensed the NES out to Mattel, who didn’t do a lot with it; then Nintendo took back the rights (and made their NES and games incompatible with the Mattel NES and its games, while also splitting Europe into PAL A and PAL B, so nobody even knew what exactly to buy and what would work on their console), and their release policy was crap. Super Mario Bros. 3 was released in 1992, several years after the original release. Between 1992 and 1993 they released Mega Man 2 to 4 in quick succession because they were lagging behind so much.
Europe always was more into computers, be it the C64, the Amiga, or, beginning in the late 80s, the PC, but consoles were popular. The Sega Master System, while a failure in North America, was a big success in Europe. It easily outsold the NES in England, France, Spain, and Sweden, while competing with the NES head to head in Germany and Italy.

The Sega Master System never outsold the NES in Sweden. Quite the opposite; the ratio of Nintendo to Sega was probably 10:1. That’s why the Nintendo licence is still handled to this day by the same distributor as in the 80s: they always did so well.

When you look closely at Nintendo’s strategy back in 1986-88, a name that stood as discredited as Atari’s around that time comes to mind: STEVE JOBS. Deliberately or not, Nintendo took his design/business philosophy and reinforced it with an injection of Japanese discipline, efficiency, and thoroughness: a closed system (technical specs kept from the public, no expansions), an “appliance concept” (the NES-as-VCR-companion), end-to-end control (cartridge supply, licensing, distribution), painstaking attention to detail (controller size vs. children’s hands, for example), and a coherent “design language” for software (just as Mac software was easily recognizable, all Nintendo games had that pervasive “Nintendoness”). As we say in Spanish, “Nobody is a prophet in his own land,” which means that Steve must have laughed a lot at his peers in the computer industry once Nintendo’s strategy became apparent.

The ultimate goal was to create a unique user experience that transcends any specific “application”.

Many fairly popular arcade games became smash hits and classics after getting the “Nintendo treatment,” either directly by their original developers or in the form of “new” titles “borrowing” other games’ concepts. Some examples:

The Double Dragon franchise from Technos became a gold standard for the brawler genre when its first and especially its second iterations were released on the NES, even though the hardware was very limited compared to the 16-bit arcade or home versions. It was the “Nintendo treatment”: exclusive levels, new and improved music tracks, smooth difficulty progression, exclusive gameplay modes, improved story elements, exclusive characters, not a single programming bug, exclusive secret codes. Technos also published the definitive (in my opinion) brawler of the 8-bit era: the NES-exclusive River City Ransom, which took the concept of Taito’s Renegade (also available on the NES and pretty good) to a very high level of polish and gameplay depth.

Much of Konami Inc.’s arcade catalog really shined on the NES, especially Contra and Super C, which lots of people I know think were available only for that system. But it was their developed-for-Nintendo releases that really put Konami in the console-publisher pantheon: titles like Track & Field, which brought Epyx “Games”-style play to the NES and consoles in general with a very polished execution, and the Castlevania and Metal Gear franchises (still going strong on today’s systems), both of which started life as rather bland offerings for the MSX platform and became true classics after a total overhaul on the NES.

Nintendo knew that a majority of titles wouldn’t be good but made sure they weren’t worse than mediocre (think the countless movie tie-ins), offsetting their negative effects with top titles (developed in-house or by others) that were both spectacular and as system-specific as possible.

One ironic aspect of all this is that Nintendo proved to be a better Steve Jobs than Steve Jobs. :) Apple fairly quickly started backing away from some of these philosophies with the Macintosh, especially after Jobs’s ouster. The Mac Plus that appeared in 1986, for instance, was already a much more conventional computer, with expansion slots, etc.

No, it was the Mac II and the SE that had expansion slots. In fact I’m pretty sure you’re thinking of the II.

The Mac Plus had RAM slots (which were a pretty new thing at the time) and a SCSI port. (And while SCSI was far and away superior to the previous expansion ports available, there were already drives and such that attached to the serial or floppy ports, so SCSI wasn’t a big break with the past)

Well Jimmy, maybe you have made a very novel insight there and I am ecstatic to have contributed.

Maybe I can further your idea a little: Apple basically started digging their own grave by rebelling against Steve’s original gospel. This is particularly evident in their shameless urging of users to purchase third-party graphics cards employing standard chipsets which were also used in IBM-compatible cards, essentially dumping the closed-system philosophy. Meanwhile, Nintendo adopted a radical Jobs-style closed-system-plus-end-to-end-control philosophy that put them at the very top.

Could it be that the success of Nintendo and its later followers Sega and Sony had something to do with Jobs returning to Apple, kicking out Sculley, and implementing his original strategy through the i-line while encountering little, if any, dissent internally or externally?

A little edit/clarification: I mean Mac II graphics cards. Apple’s own were pretty basic, sold separately, and employed third-party chipsets anyway. That’s what I meant by “urging,” metaphorically. Apple’s video cards, brand loyalty aside, were a shabby deal. For comparison, the Amiga only got third-party graphics cards in 1992-93, when its OS finally started supporting them.

I think any connection between the styles of Jobs and Nintendo is very, very superficial. The only real difference between the Famicom/NES and earlier consoles is the walled garden, and that was mostly a Nintendo of America thing (systems in Japan didn’t even HAVE a lockout chip, which was also the case for the second-generation NES toploader) rather than a thing of Nintendo as a whole, intended to address specific market concerns.

In fact, nearly all of the shady things Nintendo was (rightfully) accused of back in the day lie entirely at the feet of the American branch. These could fill an article by themselves (developers were barred from releasing more than five NES games in a year or they’d be banned from ever having a game licensed for the console; stores that sold unlicensed games (somewhat justifiable) or carried any competing product (the Sega Master System and its games, Atari consoles and games, the Genesis when it came out, etc.) would be blacklisted; Nintendo had to make all the cartridges and set steep minimum orders; and so on), which would be beyond the scope of this blog, but their most damning crime probably is not.

Nintendo of America is single-handedly responsible for the reputation of video games as nothing more than toys for children. In the Atari age, games were too simple to be pigeonholed into age rankings (apart from the “pornographic” games that ranged from poor to vile in concept and from vile to extremely vile in gameplay), and games for the Famicom ranged from G to R in MPAA terms (E to M in ESRB terms), not that they were rated that way at the time. But Nintendo of America barred from import essentially any game that wouldn’t receive a G rating (PG would generally fly in the early days of the console, but standards tightened by the end of the NES era and only loosened about half-way through the SNES era with the famous Mortal Kombat fiasco), often inflicting massive censorship on games to make them fit, and devoted 100% of their marketing toward kids (the Famicom was, appropriately for something named the “FAMIly COMputer,” marketed as something for the entire family, with games for Mom and Dad as well as the kids). The greatest casualty of this is the console scene, with many people considering a game childish or even infantile merely because it was made for a console, but I’m all but certain that a lot splashed onto computer games as well.

I’m not sure what evidence of a direct link can be found, but I think that it is no coincidence that, just as the NES peaked from ’88 through ’91, blood and nudity began to disappear from PC games, even those intended for an adult (as opposed to an “adult”) audience, and the releases of Wolfenstein 3D (’92) and DooM (’93) were met with fierce “won’t somebody PLEASE think of the children?” controversy when so many earlier games were not. Before Nintendo, electronic games weren’t automatically for kids.

I don’t disagree at all with most of this, but do think your last paragraph is a stretch. I’ve been immersed in the computer-game industry of the 1980s for a long time, and have a) never noticed the phenomenon of which you speak and b) never heard the concerns you suspect led to it expressed or implied by anything I’ve read or anyone I’ve spoken to. I would actually say the industry grew noticeably more willing to take chances on risque content already in the latter 1980s, in the wake of the hugely respected Infocom taking a flyer on Leather Goddesses of Phobos and winding up with a big hit. Quite a number of “adult” adventures followed — most notably Leisure Suit Larry, but there were many more than are often remembered today. Even Wizardry VI in 1990 filled its dungeons with topless babes.

The visceral violence that began to appear in early 1990s PC gaming, and the relative lack thereof in earlier years, is rather entirely down to graphics technology. Wolfenstein 3D and especially Doom appeared just about the instant that the technology allowed them to exist. They demonstrated to the industry at large that there was a major audience out there eager for as much blood and visceral violence as possible, and this proved a wonderful way to differentiate PC gaming from the censored world of the consoles.

This isn’t really your fault at all, but it does kind of depress me that when we talk about making games more “adult” we always end up discussing their levels of sex and violence. One might argue that that discussion is as much a symptom of gaming’s ongoing infantilization as all of Nintendo’s cute, bouncy, cartoony, and ultimately meaningless games for children.

I think you’re looking too much at the outliers here. Yes, there were still some adult-humor games, just as there was Ultima IV when nobody was aspiring for any kind of moral analysis in games. Yes, a few games continued to eschew painted-on underwear in their customizable characters and make sure that feral humanlike monsters and “uncivilized” tribes didn’t wear clothes that didn’t make sense. But far more didn’t, and there’s a very clear trend starting in this era for the vast majority of games not only to stay in the “society and the market is comfortable with this” zone, but “this won’t make parents afraid of this existing”.

Note that I’m specifically focusing on the years 1988 (when the NES achieved total ubiquity) through 1996 (when the first generation raised alongside Nintendo started to reach adulthood, and consoles made by companies without a strict content policy dethroned Nintendo for good). Toward the early end of that span, any effect is pretty minor, while toward the end people were already testing the limits.

Using DooM as an example, of course violence of that degree wasn’t really possible much before that year, but it’s more the attitude it received that I think is important. A huge chunk of the public looked at it, considered it to be intended for kids solely because it was a computer game, and were outraged that it existed. Part of that was the standard New Media Are Evil reaction, but I don’t think it would have happened if Nintendo hadn’t pushed “video games are for kids” so aggressively and had instead allowed games like Sweet Home (a horror RPG), the Megami Tensei RPG series (the gameplay revolves around demon summoning, and capital-G God is frequently a possible boss fight), or (in the SNES era) Tactics Ogre (a very political strategy RPG dealing with race war, false-flag massacres, and other very dark subjects).

As for why we always end up discussing sex and violence when looking for something to make games more adult, it is because it is pretty easy to look at them and say “this level is OK for kids, and this level is not”; although everyone will put the line in a different spot, at least a general consensus can be reached. When you start talking about complexity of plot, philosophical/religious overtones, how much you have to keep track of for success, and other things of this sort, you wind up spending the entire conversation trying to define the line in the first place.

Sorry, my friend, I simply don’t believe the trend is there, and abstractions like you’ve indulged in here do nothing to convince me. If you really *do* wish to convince me, you need to give me a lot of examples of computer games from before this era that did include risque content and preferably at least some concrete examples from during this era of publishers deliberately choosing to tone down their games out of the concerns you speculate were in play.

In the meantime, the “outliers” make for one hell of a long list in the absence of concrete counterexamples:

— Leather Goddesses
— 4 Leisure Suit Larry games
— 2 Les Manley Games
— Sex Vixens From Space
— Strip Poker, with half a dozen additional “data disks”
— Wasteland with its prostitute, its “Wasteland herpies,” its blood-splattered medic, and its gleeful descriptions of ultraviolence
— Neuromancer with its own prostitute, its “happy ending” massage parlor, and its sexy swingers BBS
— the Gold Box games with their cheesecake Boris Vallejo-inspired art
— Dragon Wars with its own cheesecake art, and casual profanity in the manual (“You’ve paid good money for this game, so you can do whatever you damn well like with it”)
— Shogun, where you have to type “make love to Mariko” to finish the game
— Romantic Encounters at the Dome
— Corruption, whose plot revolves around your wife’s infidelity with your boss and where you can snort cocaine
— the heaving bosoms and softly lighted sex scenes in almost every Cinemaware game ever made
— the three Spellcasting games
— Wizardry VI and VII with their bare-breasted warrior maidens

“In fact, nearly all of the shady things Nintendo was (rightfully) accused of back in the day lie entirely at the feet of the American branch.”

This isn’t true. Namco and Hudson Soft both had deals signed with Nintendo before they enacted policies in both Japan and North America. Uemura talks about it in the Iwata Asks on the Famicom, though he doesn’t give full details. Namco chose to go to the Genesis because of their inability to get the same terms from NCL, not NOA.

Of course much can be laid at NOA’s feet, but I think it’s been a misinterpretation that their division was about nothing else but creating restrictions. See an above post for a link to a podcast which divulges more on NOA’s exact measures.

I’m a bit late to the party so I won’t try to jump in to any of the discussions (even if Double Dragon on NES is far from bug free), but I did want to mention a few things.

The article mentioned Ultima on NES, but the system also got at least one Wizardry. I spent some time butting my head against my neighbor’s copy, as he had no interest in trying to figure it out.

At least in Japan, the system did have a disk drive. The Famicom Disk System lasted for several years in the market, and was the initial home for major franchise starters like The Legend of Zelda, Metroid and Castlevania.

While the FDS was mainly a lower cost alternative to cartridges, the Famicom did have a few initiatives to expand it towards PCs. Off the top of my head, I know it had a modem, and also that it received a version of BASIC. There was a keyboard attachment in support of this, and I believe programs could be saved to cassette.

In Game Over, David Sheff spends a lot of time obsessing over these moves. He wants to see the NES as essentially a Trojan horse: get systems into American homes as videogame consoles, then expand them into full-fledged computers to take over the PC industry. This probably says at least as much about the American fear and paranoia about Nintendo and Japan in general that still persisted in the early 1990s, when the book was written, as it does about Nintendo’s real plans. Interesting how works of history themselves become a part of history.

Anyway, like some of the details other commenters have provided, it’s certainly appreciated but was just a little further down in the weeds than I wanted to go in this one-article overview of a big and complicated subject.

Fair enough. I just thought it was interesting in light of similar moves made by Atari and Coleco. There was actually an earlier design before the NES that positioned it more as a computer. If I’m remembering correctly, Nintendo dropped it due to a poor reception at CES.

I think you’re correct that the hand-wringing about the NES being expanded into a real PC was unfounded. It’s never been Nintendo’s business, and many of these peripherals couldn’t even be used at the same time, as they connected to the system via the cartridge slot and didn’t feature any sort of pass-through or the like. The Famicom Disk System was just a cheaper, higher-capacity alternative to carts, with saving your game as an added bonus. By the time we would have gotten it, larger carts had become economical, so they didn’t bother.

Anyhow, I’m personally glad that neither consoles nor PCs ever completely dominated the other. Their divergent evolution led to a greater variety of games, which I think is the way it should be.

That would surprise me, but of course, being a gamer myself, and an avid Sonic fan, I can’t possibly be impartial or speak for non-fans. :)

There were Sonic cartoons and comics, even. I remember liking them. I don’t remember them being any better than the Mario film, but I liked THAT one too. Ah, to be too young to bother about quality in entertainment…

Yeah, the reality is Mario has no serious competition. Pac-Man was huge in the early 80’s but he hasn’t been relevant in thirty years. He might be close to Mario in recognizability if you limit your poll to people over 40, but otherwise no. Likewise Sonic was huge in the 90’s (and remains much more relevant than Pac-Man today) but is a long, long way below Mario. Mario’s real biggest competition for most recognizable character is probably the star of Nintendo’s other long running franchise… Link. But Mario wins out if only because his name is the title of the series and Link’s isn’t.