Dreams of the Rarebit Fiend

Just a few days ago I released JSCrate.com, a site to gather together news, links, and original writing on JavaScript and related tech (so you will see some HTML and CSS mixed in there on occasion).

DailyJS used to fill this particular niche but it hasn’t been around for quite a while now, and I thought I’d take a crack at it myself. Just today I put up my first original piece, about connecting your JavaScript app to IFTTT to give you a way to reach hundreds of other websites, services, and even hardware.

There’s lots of odd stuff I see here and there about Angular 2 being a “failure” or similar odd turns of phrase. I guess in order to really refute that we’d first have to come to an agreement on what success and failure constitute in the case of JavaScript application frameworks. React and Ember aren’t as big a success as AngularJS (at least not if you look at Stack Overflow questions, search volume on Google, etc.). Are they then failures? I certainly don’t think so. If so, most frameworks are going to get a big “failure” stamp on them.

Angular 2 may not sweep aside React, Ember, and AngularJS. However, there are reasons for some people to be interested in it, and some of those people should probably adopt it instead of continuing with whatever they’re using today. Let me tell you why… Angular 2, or something like it, is the future.

I mean that in a very literal sense. Angular 2 is the future because it’s built from the future.

Angular 2 is built on future JavaScript

To adopt it you’re going to be strongly pushed in the direction of TypeScript. I’ll be the first to admit that I’m not in love with the “Type” part of TypeScript. I used strongly typed C++ and Java for a large part of my career and I’ve never mourned not having that strong typing in JavaScript.

But I love the extensive amount of ES2015 it embraces and makes available to me now, today, for use in my apps. I could of course skip using TypeScript (or Babel) since Firefox, Chrome, Microsoft Edge, and (announced today as I write this) Safari are all on track to support ES2015.

However, there’s a fly in that ointment and it takes the form of IE11. Unlike IE8-10, which Microsoft effectively killed in January 2016, they’ve committed to supporting IE11 until they end-of-life Windows 7, Windows 8, and Windows 10! Think about that for a second. It means that unless you’re willing or able to just blow off any user running IE11 (or an older version of Firefox/Safari/Chrome), you’re going to continue writing ES5 JavaScript (circa 2009) for years and years, or you can get used to using a transpiler like TypeScript.

Note: This is something which affects other frameworks just as much as it does Angular 2. Many developers are going to adopt a transpiling solution in the future to be able to start using modern JavaScript conventions even if they prefer to use React, jQuery, or whatever. Picking another framework doesn’t opt you out of this problem.
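To make the trade-off concrete, here’s a small hypothetical sketch (not from any particular app) of the kind of ES2015 syntax a transpiler like TypeScript or Babel lets you write today while still emitting the ES5 that IE11 understands:

```javascript
// ES2015 as you'd write it for a transpiler to compile down to ES5.

// Classes instead of prototype wrangling.
class Greeter {
  constructor(name) {
    this.name = name;
  }
  greet() {
    // Template literals instead of string concatenation.
    return `Hello, ${this.name}!`;
  }
}

// const/let instead of var, arrow functions instead of function expressions.
const names = ['Ada', 'Grace'];
const greetings = names.map(n => new Greeter(n).greet());

console.log(greetings.join(' ')); // Hello, Ada! Hello, Grace!
```

The transpiler turns all of this into the `var`, `function`, and string-concatenation equivalents that run fine in a 2009-era engine.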

Angular 2 is built on future web browsers

The components you build with Angular 2 are designed in such a way that Google can easily leverage WebComponents technology to power them in the future for those browsers which support them. In fact, doing so will actually improve the quality of some aspects of the components because they will be able to use CSS contained very tightly to the component with no concern of outside CSS leaking in or contaminating the styling of pages which use them.

Again, WebComponents are part of the future for browsers. I want them today, I will want them in the future. In fact, I would like to imagine a beautiful future where I can build an app and mix-and-match WebComponents built using Angular 2, React, Polymer, etc. and it not be painful or slow to do so. Angular 2 is emulating a large part of what they offer today because they anticipate you’re going to want it anyway in a couple of years.
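For a taste of the encapsulation being described, here’s a browser-only sketch using the Shadow DOM API that eventually shipped (attachShadow postdates the era of this post, so treat it as illustrative rather than what Angular 2 emulates internally):

```javascript
// Runs in a browser, not Node. Styles inside a shadow root stay inside it.
const host = document.createElement('my-widget');
const shadow = host.attachShadow({ mode: 'open' });

// This <style> applies only within the shadow tree; the page's CSS for <p>
// can't leak in, and this rule can't leak out to the rest of the page.
shadow.innerHTML =
  '<style>p { color: crimson; }</style>' +
  '<p>Styled by the component alone.</p>';

document.body.appendChild(host);
```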

Angular 2 is built on modular JavaScript

Even if you’re using one of the fancy transpilers like TypeScript or Babel, or just the ES2015 support built into the cutting-edge browsers, you still don’t have a built-in solution for loading JavaScript modules. The module loading recommended for the future of JavaScript is something you have to (again) emulate if you want to use it today. Angular 2’s solution for that emulation is SystemJS. It allows you to use import/export in your code so your HTML file only directly includes a small handful of .js files and the rest are handled by the emulated module loading code.
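As a sketch of what that looks like in practice (the file names are hypothetical, but this mirrors the Angular 2 quickstart layout), the page loads SystemJS and a config file directly, and everything else arrives through import/export:

```html
<!-- index.html: the only scripts referenced directly. -->
<script src="node_modules/systemjs/dist/system.src.js"></script>
<script src="systemjs.config.js"></script>
<script>
  // SystemJS fetches app/main.js, then (transitively) every module
  // that main.js and its dependencies import.
  System.import('app/main').catch(function (err) {
    console.error(err);
  });
</script>
```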

It’s the future; today. And, like the last two things I talked about, eventually it’s going to become an issue for developers of any framework. In this case no browser supports the functionality, even the cutting edge ones, so your choices are either emulate or do without.

Angular 2 continues the idea of a complete ecosystem

I credit Ruby on Rails for giving us the idea of a complete ecosystem which solves 80% of your problems out-of-the-box. You can say this isn’t “the future” like the three things above, but I would argue it is. I still believe that for most developers, a good framework which covers the bases well is a better bet than mixing together their own set of libraries for testing, routing, etc.

When you buy into Angular 2’s ecosystem, you don’t just get ES2015 JavaScript, modular JavaScript, and WebComponents. You also get solutions for unit testing, end-to-end testing, dependency injection, routing (though that sometimes appears to be the “router-of-the-day”… *sigh*), and event handling (RxJS is so much better than Promises, trust me). Also, you’ll get books and video courses (full disclosure, I’m working on Angular 2 Essentials right now), and be able to hire developers who already know most of your front-end stack before they start working. I consider that a powerful advantage.

In conclusion

Will Angular 2 be as popular as, or more popular than, all the other frameworks? Maybe, maybe not. However, there’s enough here already to make it appealing to developers who are tackling multi-year projects and need a long runway, without worrying that the framework they pick will become obsolete due to changes in the browser or the language. I doubt those developers will end up characterizing it as a failure.

If you’d like to save a few bucks on my Directives course (or one of many other books/videos on AngularJS) there’s still a couple of days left in Packt’s Angular Week sale: http://bit.ly/20hJw8

If you want to be ready for Angular 2 I’d advise you to get really comfortable with Directives now. Their heir in Angular 2 is the Component, and everything in Angular 2 revolves around it. The old AngularJS 1 controller and view (both of which already had equivalents in the Directive) are done away with in favor of nesting Components from the top of the UI to the bottom.

P.S. My course has a 4.6/5 rating over at Udemy so somebody must have liked it!

Learning AngularJS Directives covers everything you need to know about how to build and use directives to extend your HTML and reach the next level in your AngularJS education. Never having done an online course like this, it was as interesting and educational to me as I hope it will be for anyone who watches it.

First off, let me say one really important thing. If you want your next project to succeed, I believe you should automate your server setup and deployment from day one. If you find yourself doing a particular thing even twice, it’s time to write down the steps to do it in a form you can run again, not just make a note of it in Evernote or on a piece of paper.

If you do that you will always be keen to do another deployment whenever you want to make a small improvement or fix a bug. Especially when you’re returning to the project weeks or months later and the process to do things is no longer fresh in your mind. Any solution at all is better than no solution at that point.

PaperQuik and ClearAndDraw

Last year I threw together a couple of small projects, grabbed a DigitalOcean server (thanks to their $60 credit from ng-conf last year), figured out how to set up my own server, and deployed everything to create some new websites. I did everything from start to finish myself, and I felt duly proud about having built something from scratch and launched it, no matter how small it was.

But there’s a siren song that every developer feels when working in a particular language, be it Ruby or Java, JavaScript or Python: that all your tools should be written in your favorite language. You want your web server, build tools, continuous integration server, and sometimes even your editor built using the language you use most of the time. The same things get rewritten over and over to support that idea, and although I hate the wastefulness of it, I certainly understand the feeling. Everything I used to do administration tasks for my projects was Bash shell scripts, and I didn’t really like them much.

Then I started seeing new deployment and automation tools like Flightplan and Shipit come out specifically for JavaScript developers. Both are focused on the deployment part of things and not the building and development automation tasks which Grunt or Gulp focus on. So I thought it might be interesting to try to replace my shell scripts with these tools to see how easily they could do the same jobs. The tasks covered by my scripts were: initial machine configuration, updating of Ubuntu (mainly to get security fixes), deployment, and creating an SSH shell to the remote server. It’s not a lot, but that covered everything I found myself doing repeatedly as I put together my projects.

Flightplan

First I started with Flightplan. It’s supposedly similar to Python’s Fabric tool; never having used the latter, I can’t say anything about that. What I can say is that I was able to cobble together a flightplan.js file which allowed me to do three of my four tasks (you can run an SSH shell under it but not easily interact with it, so I abandoned that).

I might not be making the absolute best use of Flightplan because I was trying to import the same commands I had used in my shell scripts into the flightplan.js file to create tasks in it. However, it worked and it was pretty straightforward.
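Since the original flightplan.js isn’t reproduced here, a stripped-down sketch of its shape (the host name and commands are placeholders, not my actual setup) looks something like this:

```javascript
// flightplan.js -- run with `fly deploy:production`.
var plan = require('flightplan');

plan.target('production', {
  host: 'example.com',
  username: 'root',
  agent: process.env.SSH_AUTH_SOCK
});

// Local section of the deploy task: runs on your machine first.
plan.local('deploy', function (local) {
  local.exec('grunt build');
});

// Remote section of the *same* task: runs on the server afterwards.
plan.remote('deploy', function (remote) {
  remote.exec('sudo service apache2 reload');
});
```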

Notice the way the install task (or the deploy task) switches back and forth between sections which are for local and remote. I don’t think I messed that up; it seems like the normal structure for Flightplan, and it seems odd to me. Doubly so because Flightplan doesn’t seem to have a mechanism for task dependencies (I need to do task A before I do task B or task C, and I don’t want to repeat that code).

Pros:

All the commands within a task execute sequentially so it’s an easier transition from something like a shell script.

Allows you to specify multiple servers and will allow you to run tasks simultaneously against all of them. Not anything I need at this time, but you never know when a project could grow from one server to two.

Cons:

A given task is broken up into local and remote sections and they run sequentially based upon them all having the same name. There doesn’t seem to be any way for a given task to specify that it has dependencies upon other tasks being executed first (for example, maybe I do a directory cleanup before several different tasks).

Although it’s easier to deal with serially executing code, if you have several actions which would execute more quickly in parallel Flightplan doesn’t really support that.

Shipit

Then I built the same thing again in Shipit. As with Flightplan, it claims similarity to an existing tool, in this case the Ruby deployment tool Capistrano. Again I have to claim ignorance, never having used Capistrano. Here’s the same set of commands (install, deploy, and upgrade) using a Shipit file:

// Running this requires installing Shipit (https://github.com/shipitjs/shipit).
// Then use commands like:
//   shipit production install
//   shipit production deploy
//   shipit production upgrade
module.exports = function (shipit) {
  require('shipit-deploy')(shipit);

  shipit.initConfig({
    production: {
      servers: 'root@PocketChange'
    }
  });

  var tmpDir = 'PaperQuik-com-' + new Date().getTime();

  shipit.task('install', function () {
    shipit.remote('sudo apt-get update').then(function () {
      // We'll wait for the update to complete before installing some software
      // I like to have on the server.
      shipit.remote('sudo apt-get -y install apache2 emacs23 git unzip').then(function () {
        // We don't need the following set of actions to happen in any particular
        // order. For example, we're good if the disables happen before the enables.
        var promises = [];
        // We couldn't copy this file earlier because there isn't a spot for it
        // until after Apache is installed.
        promises.push(shipit.remoteCopy('paperquik.conf', '/etc/apache2/sites-available/'));
        promises.push(shipit.remote('sudo a2enmod expires headers rewrite proxy_http'));
        promises.push(shipit.remote('sudo a2dissite 000-default'));
        promises.push(shipit.remote('sudo a2ensite paperquik mdm'));
        // But we do need this to wait until we've completed all of the above.
        // So we have it wait until all of their promises have resolved.
        Promise.all(promises).then(function () {
          shipit.remote('sudo service apache2 reload');
        });
      });
    });
  });

  // This shipit file doesn't yet use the official shipit deploy functionality.
  // It may in the future but this is my old sequence and I know it works.
  // Note: I also know theirs seems like it might be better because it can
  // roll back and I definitely do not have that.
  shipit.task('deploy', function () {
    shipit.log('Deploy the current build of PaperQuik.com.');
    shipit.local('grunt build')
      .then(function () {
        return shipit.remoteCopy('dist/*', '/tmp/' + tmpDir);
      })
      .then(function () {
        shipit.log('Move folder to web root');
        return shipit.remote('sudo cp -R /tmp/' + tmpDir + '/* /var/www/paperquik');
      })
      .then(function () {
        shipit.remote('rm -rf /tmp/' + tmpDir);
      });
  });

  shipit.task('upgrade', function () {
    shipit.log('Fetches the list of available Ubuntu upgrades.');
    shipit.remote('sudo apt-get update').then(function () {
      shipit.log('Now perform the upgrade.');
      shipit.remote('sudo apt-get -y dist-upgrade');
    });
  });
};

Here’s the original over on Github. The huge and most obvious difference here is that Shipit wants to do all of those Apache configuration commands in parallel. So I let it. I just added a little bit of code to delay restarting the server until all of them have completed (the Promise.all call in the install task above). Likewise, the deploy and upgrade tasks want to execute steps in parallel, and I can’t always let them. Since all of the asynchronous actions in Shipit return promises, I just added a little bit of code in each task where I need to control the order in which things happen, and it works.
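Stripped of Shipit, the ordering trick is plain promise composition: let a batch of independent steps run unordered, then gate the final step on Promise.all. A self-contained sketch (the step names are just stand-ins for the remote commands):

```javascript
// Simulate asynchronous actions like shipit.remote('...') with promises.
function step(name, log) {
  return Promise.resolve().then(function () {
    log.push(name);
    return name;
  });
}

var log = [];

// These three don't care what order they finish in...
var batch = [
  step('enable modules', log),
  step('disable default site', log),
  step('enable our sites', log)
];

// ...but the reload must wait for all of them, so gate it on Promise.all.
var done = Promise.all(batch).then(function () {
  return step('reload apache', log);
});

done.then(function () {
  // 'reload apache' is always last, whatever order the batch ran in.
  console.log(log.join(', '));
});
```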

Pros:

Executes commands within a task in parallel to achieve maximum speed.

Allows you to specify multiple servers and will allow you to run tasks simultaneously against all of them. Not anything I need at this time, but you never know when a project could grow from one server to two.

Supports tasks which run other tasks (or which emit and listen for events), so dependencies between tasks can be handled.

Cons:

The documentation. Seriously, come on. I’m going to have to contribute to this project just to fill out the documentation some.

Harder to structure serial commands which need to execute in a particular sequence.

Thoughts

You see what I mean about people rebuilding the same tools over and over again just using different languages. Both Shipit and Flightplan claim similarity to previous tools for Ruby and Python. However, at the same time I have to confess I don’t find either of those particularly appealing to use when all I use day to day is JavaScript. I used Java for over ten years and I still don’t want to do all of my build and deployment with Ant. When I wanted to control the order of the asynchronous events in Shipit, it was nice that I could easily see how to do that from my experience with JavaScript promises in AngularJS and Node.js.

Both tools allow you to run tasks against multiple servers simultaneously. Both allow you to have multiple sets of servers so you can have staging servers or, if you’re just playing around like me, a Vagrant server you bring up and down just for testing purposes. Either could probably do your administration jobs, but I just liked Shipit a little bit better because it seemed more powerful. Going forward I’m probably going to pull the Flightplan files out of the master branch of my projects and leave them up only for reference from this blog post. Now I just need to see if I can do something about that Shipit documentation.

Any time I see the latest “I Hate AngularJS and So Should You” article I always skip straight to the end because that’s the very best part of all of them. It’s the fun part where we get to hear what the author of this particular piece is going to advocate you use instead. Here are the usual suspects and my highly uncharitable response to each one:

I’m writing my own framework now

Bonus points for this one if it’s accompanied by a link to their new half-formed idea on Github. It should continue getting commits for at least a couple of months.

There are literally dozens of front-end frameworks at this point, but theirs is going to be way better than any of them. Look, there are really only one or two guys who will work on it, but they are stellar programmers. God knows they are going to do a much better job than the programmers at Google, Facebook, or the likes of Yehuda Katz and Tom Dale.

TodoMVC is beginning to look like one of those four page resumes you get these days with all of the “frameworks” that they have examples for. If you don’t believe me, be sure to look at their “Labs” tab. Yes, they have so many they had to put in tabs.

Backbone.js

Ha ha ha ha ha ha ha ha hahahaha haha ha ha ha ha. Oh god. I may have hurt myself. This person is so upset about how “heavyweight” and “complex” AngularJS is. Look for lots of mentions of how things should be “minimal” and “simple”, and at least one mention of how many lines of code Backbone.js is versus the object of their derision.

I did Backbone.js for two years; that’s why, when I went somewhere new, I put them on AngularJS instead. I really hope the people who advocate going back to Backbone.js have to work on a large team of mixed-skill-level developers. The unskilled ones will make a hash of any framework, but what they can do with Backbone.js is just amazing.

That New Framework That You Just Heard About on Hacker News Two Weeks Ago

This is the framework from author #1 above. It’s going to solve all the ridiculous mistakes that AngularJS made, and probably all of those from the other major frameworks at the same time. Ultimately it won’t get any more updates, but that’s OK because it only got used on one project before our author realized it not only had as many problems as the major frameworks but many, many more. Plus it gives him/her an opportunity to tweet about the abandonment of this framework and the excitement for the next new one.

The Chinese Menu Framework

This is the idea that sticking together a bunch of different best-of-breed pieces to make your own framework is perfectly viable. Just pick something from columns A/B/C/D and start using it. You’ll find lots of people who can answer your questions, there are many books and videos for that particular combo of tech, and there are developers out there by the hundreds you can hire who will have no problem diving right into your projects.

Ha ha. I’m kidding. It doesn’t really work that way. Pick an arbitrary grab bag of stuff and maybe you’ll make some excellent choices. But you’ll have to live with that decision for quite a while. Even a less popular stack like Ember.js is going to get more third party support than whatever you decide upon for yourself.

Again, I counsel rationality

Above all, please do a quick experiment for me. The next time somebody tells you that AngularJS is a dead end and you can’t rely on it for years to come, ask them what they would have recommended back in 2013. Just two years ago. What set of stuff would they have advocated then that would be doing so well today and have a long-lived future into 2017 and beyond, other than AngularJS? Backbone.js? I don’t know of anything.

My point is this: front-end and JavaScript tech is changing at a rate way too high for anyone’s predictions about two and three years down the road to have a lot of merit. AngularJS seems like a reasonable bet to have done well, and to have lots of info available about migrating from 1.x to 2.0, so at the moment I’m still on that path. In the meantime I hope to learn more about Facebook’s stuff to see if it gives me useful ideas or to see if I can incorporate parts of it into AngularJS (Flux seems interesting, for instance, and would likely slide into most of the frameworks). But the people who speak with such certainty about the future… maybe they don’t see it as clearly as they think.

AngularJS is not perfect. I’m not about to say that it is. It has problems; over time they’ve been worked on and reduced. I’m sure if I went and picked up React/Flux/Relay/whatever (come on Facebook, give a name to your stack!) or Ember.js I’d see much the same things. Lots of great people are working on them and they have thousands of adopters. Most of the time, for most projects, it works pretty well.

If you’re having problems with AngularJS it may be that you need to learn more, look at some open source, maybe even pull in a mentor with more experience. Alternatively, if you’re struggling and you think you’ve put in more than enough effort, look at one of the major alternatives and see if it works better for you. I haven’t put in as much time on Ember.js, but I’ve looked at Facebook’s offering and it is very different from what Google put together.

Recently a “eulogy” for RadioShack was making the rounds online. Let’s ignore for the moment that it’s a little harsh to have a eulogy before somebody is even dead; RadioShack is definitely on life support, so I certainly understand why now seemed like the time. This could easily be their last Christmas.

The thing that struck me was how different the experience was from my own. I grew up in Fort Worth, TX and RadioShack has been here, well, forever. After I graduated from college in the late 1980s, I went to work for the Tandy Corporation from 1987 to 1992 (and then a couple more years at AST Computers after they bought Tandy’s computer business). So I thought I’d give the company a different eulogy, one from the perspective of a different era and a different part of the business, and one that’s perhaps more nostalgic and melancholy and less bitter.

I

It starts with the Texas Employment Commission (TEC). During my summers off from attending Rice I had taken one job making pizzas at Mazzio’s and another working in the Plans & Specs division of the Army Corps of Engineers. Trust me, if you are ever given that choice, pick pizza.

After my job at the Army Corps I was cured of taking any job just because it wasn’t food service. I went down to the TEC and told them I wanted something where I would be programming. I figured after years of BASIC programming on my own and three years of learning languages at school, somebody would want to hire me to do something. But the response from the lady at the TEC was to a) forget any idea of doing something like that, or even computer work of any kind, and b) maybe she could find me something that wasn’t menial labor, but I shouldn’t expect much just because I was almost done with college.

Fortunately, I completely ignored her horrific, depressing advice (and I mean depressing in both senses of the word; she seemed as depressed as the advice she gave) and went down to fill out an application at Tandy. They hired me quickly and told me I could come in and test software. I think I did it for about five days before they realized I knew Pascal, Modula-2, C, some basic Unix commands, and more. I was immediately moved over to start programming in C for Tandy.

II

The people I had gone to work for in the software division were working on the Varsity Scripsit word processor. It was a pretty good little word processor which ran on MS-DOS on PCs, and several decades before the mantra of “eat your own dog food” became common, most of the team was actually using a stripped-down version of the word processor as a text editor to edit the code for the word processor! The Scripsit word processor line had been fairly successful for the company on previous machines (I think the Xenix-based Model 6000 and others), so this was one of their first forays into PC applications.

Since the core of the project was already pretty solid, most of the team was working on a multitude of expansions for it including:

A Calculator

Printing graphics on dot matrix printers

Dictionary/Thesaurus

Macros

The list went on and on

However, after adding all of that, memory constraints on real world machines made it clear that it wasn’t going to work with the kitchen sink attached to it, so the dot matrix graphic printing I had worked on and several other features all had to be removed to get it to load and run. C’est la vie.

P.S. There were seven people working on this software, including Kevin (more on him later), who had written the editor/core of the word processor and was one of only two people in the crew who had a hard drive in his machine. Every other machine was floppy-only. You’ve never experienced software development until you’re doing all of your editing and compilation off of 5 1/4″ floppy disks.

III

After I went back to school, either I contacted Tandy or they contacted me, I can’t remember which, but they told me they would really like me to come in and work even during the brief period I would be home for the Christmas holidays. This was a) enormously flattering and b) a source of serious money for a kid in college. I think I might have given some real gifts that year.

Tandy was working on their Tandy 1000 series, which were actually not clones of the IBM PC but of the IBM PCjr. They had graphics built in (320 x 200 in four colors! Booyah!) and, thanks to some really clever engineering from one of their crew, they were adding digital audio by piggybacking on the existing hardware they had added to support joysticks. Apparently the digital-to-analog converters had multiple uses, and he figured out how to use them for something which wouldn’t be common on other PCs for years to come (think SoundBlaster cards) with only a few cents of additional cost.

As with Varsity Scripsit the digital audio recording and playback software (DeskMate Sound) was being written again by Kevin (yes, he really was that good). He was hard at work on a music program (DeskMate Music) which actually used sampled instruments digitized with DeskMate Sound.

If you don’t want to watch it all the way through, skip to ten minutes in and listen to the piano. Kevin was resampling notes from a handful of actual notes which could be loaded into memory for each instrument (there was not nearly enough memory in those days to have a full range of high-quality samples for each instrument, so he was adjusting them on the fly to make the missing notes). I still marvel at it.

IV

My boss for both Varsity Scripsit and the DeskMate Music/Sound work was Jeff. He was a great guy, and one of my favorite memories of him is him playing with the Sound/Music app combo. He wanted to wrap both of them with another app which could run in the stores. If you used them in conjunction you could record a simple sound in Sound (say, a person saying “Meow” or making a sound with keys) and then Music could load the recorded sound and play Jingle Bells, scaling the single “note” up and down the entire scale. It was pretty funny to listen to and seemed like exactly the kind of thing which, if kept clean, would attract people in the stores. Sadly, I don’t think it ever got built. Maybe I should make an online app for it someday.

One thing to note around this time was that Jeff had hiccups continuously. All the time. He saw doctors about it but nothing they tried helped any. It just made him miserable for a long period.

V

After I graduated from school I went straight to work for Tandy. They had made me a good offer and I worked for them for several years pretty happily. Eventually they built a new “Technology Center” next door to the headquarters and moved us over there. Supposedly they spent $30 mil when $30 mil was a whole lot of money.

I tried not to be much of a troublemaker, but I had posters up the entire time I worked at Tandy. In fact, I posted Calvin and Hobbes on the glass of my office every day and people would stop to read it. When I moved to the Technology Center the word came down that there wasn’t going to be any more of that. They had paid good money for the place, it was attractive (not really, it was a big circular cube farm), and it didn’t need posters or anything like that. They were going to select some artwork and post it on various walls and halls throughout the place to make it really nice (they never did).

So I decided to parody one of the multitude of memos we got on topics like this every day. It was really easy: I cut off the top and bottom of one memo, wrote my own, pasted those sections atop mine, and photocopied the result to produce a new memo from management. It explained that they were very happy with the all white/gray/creme motif and that employees would need to start wearing clothes which matched, and only clothes which matched. Also, the steady stream of vendors we had coming in to sell us stuff (software and hardware) would be given colored ponchos which matched that they could wear over their clothes so they wouldn’t clash. The last part was where I went so ridiculous that I figured everybody would know it was a joke. I don’t think people read that far, or if they did, they were humor-challenged. Quite a few people took it seriously and several people got very pissed off about that. But nobody ever fingered me as the guy behind it.

VI

I’ve thought about it, and most of the projects I worked on while I was there don’t stand out in my mind as particularly interesting until the coming of “multimedia” machines. Tandy had found a source for a CD-ROM drive that they could start bundling into their machines and selling as an add-on for existing PCs that didn’t cost a fortune. Around that one piece, they crafted the idea of the Tandy Sensation! machine (yes, it had the exclamation mark). It was a Windows PC with sound, good graphics, and a CD-ROM drive built in.

Our CD-ROM burner had cost a fortune and was two big boxes hooked to a PC. After burning innumerable useless discs over the course of our work we eventually figured out that even the slightest amount of work being done on the PC would cause it to screw up the disc. It had to be disconnected from the network and left untouched for the duration of a long burn to generate a disc we could use. That memory pairs with Jeff on the phone with a vendor in Hong Kong trying to get CD-ROM blanks for us to use. They were $50 each and he was trying to figure out how to order 100 of them and get them flown to us in time for them to be useful to us.

I did lots of work on graphics and animation for this machine and it was a lot of fun. Plus, Sensation! sold very well for Tandy. I was told that they sold something like 17,000 units fairly early and that was apparently quite good. Unfortunately, our success with Sensation! set us up to be the go-to people to work on the worst mess I ever saw while working for them.

VII

Philips had brought out their CD-i machine and for some insane reason there were people within Tandy who wanted to copy it. It already seemed to be a clear-cut commercial failure. It was too expensive, it didn’t seem to offer any software that people found compelling, and Philips was spending more money marketing each unit than they made if they actually sold one. Video game systems of the time sometimes lost money on hardware too, but they sold enough software to end up profitable overall. CD-i was clearly not doing that.

But none of that dissuaded the people at Tandy who believed in this project. So the Tandy Video Information System (VIS) was born. Here’s a link to information about it at Wikipedia, but trust me, it’s fairly dry and in no way conveys how much blood, sweat, and tears people poured into it, nor what a crappy boat anchor it was.

Let me just lead off with this (a promotional video for the VIS):

I really hope you watched that all the way to the end. It’s hammy, tone-deaf, and ridiculous in almost every way. I don’t know any engineer who worked on the project, on the software or hardware side, who did not tell them not to do it. I bought a Sega Genesis to bring in and show them Sonic running on the console. It was blazingly fast, and nothing, absolutely nothing, about the VIS was fast. It had a 286 processor in a box that took forever to start up your game or educational program, and if you wanted to boot it into Windows, it took forever times forever to do that.

They did focus groups and spent considerable money polling people about what they wanted from such a machine and what they would pay for it. The answer was that people were largely uninterested, and those who were interested thought it shouldn’t cost more than $400. Tandy didn’t think they could sell it for less than $800. That should have stopped them cold, but like everything else, it didn’t.

For whatever reason, Microsoft was invested in this idea too. They had a stripped-down version of Windows they imagined would start appearing in small, appliance-like boxes like this. However, Windows, even stripped down, was the antithesis of anything you wanted to boot over and over again on cheap processors with no memory. Eventually they licensed it to Tandy for inclusion in the VIS for a quarter ($0.25), at a time when Tandy was probably paying $20 to include Windows with its regular PCs. I say Microsoft was “invested” in this idea, but the truth is I think they were invested in it the same way a chicken is invested in a ham and eggs breakfast. The problem is, Tandy was the pig. I was told that Tandy spent somewhere around $75 million developing the VIS, and it sold handfuls of units (after you figure in all the returns). Eventually companies like Tiger started selling bundles which included every software title ever produced for the machine, and I think they were still only selling them at $99.

VIII

I worked for Jeff for many years at Tandy and one day he came by my cube to tell me that he needed to go in and have some surgery. He didn’t make a huge deal about it but it was clear that he was kind of sad. I didn’t think too much about it and I should have asked him to sit down and talk to me. I didn’t.

The next week, his boss broke the news that Jeff had pancreatic cancer. After they opened him up on the operating table, they just closed him back up and sent him to recovery. He died some hours later.

The hiccups he had suffered with years before had actually been one of several symptoms, according to the oncologist who diagnosed him.

He definitely deserved a better version of me than he got. I’m sorry Jeff. I really am.

IX

It wasn’t that much later that Tandy sold its computer business to AST Research. At the time AST was in the top five manufacturers and doing very well. Pretty much everyone who had worked for Tandy continued to work for AST for the next couple of years, initially in the same Technology Center but later in a commercial area on the north side of Fort Worth. I moved on to Crystal Semiconductor with some of my colleagues and eventually poor business decisions caught up with AST.

As I said, my account lacks the pathos (with the exception of VIS) that the other eulogy had but it’s my perspective and I didn’t want the other one to be the only thing everybody heard about Tandy/RadioShack if this is indeed the end for them.