Archive

One of my first computers had just 1K of RAM. That’s enough to store… well, almost nothing. It could store 0.01% of a (JPEG compressed) digital photo I now take on my dSLR or 0.02% of a short (MP3 compressed) music track. In other words, I would need 10 thousand of these devices (in this case a Sinclair ZX80) to store one digital photo!
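Out of curiosity, the arithmetic is easy to check. This is just a sketch: the ~10 MB photo and ~5 MB track are my assumed round figures, not measurements of any particular file.

```python
# Rough ratios between the ZX80's RAM and modern media files.
# File sizes below are illustrative assumptions, not measurements.
zx80_ram = 1024                  # 1K of RAM, in bytes
photo = 10 * 1024 * 1024         # ~10 MB JPEG from a dSLR (assumed)
track = 5 * 1024 * 1024          # ~5 MB MP3 of a short track (assumed)

print(f"{zx80_ram / photo:.2%} of a photo")    # 0.01% of a photo
print(f"{zx80_ram / track:.2%} of a track")    # 0.02% of a track
print(f"{photo // zx80_ram} ZX80s per photo")  # 10240 ZX80s per photo
```

So "about ten thousand ZX80s per photo" holds up, give or take the assumed file sizes.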

I know the comparison above is somewhat open to criticism in that I am comparing RAM with storage and that early computers could have their memory upgraded (to a huge 16K in the case of the ZX80) but the point remains the same: even the most basic computer today is massively superior to what we had in the “early days” of computers.

It should be noted that, despite these limitations, you could still do stuff with those early computers. For example, I wrote a fully functioning “Breakout” game in machine code on the ZX80 (admittedly with the memory expansion) and it was so fast I had to put a massive loop in the code to slow it down. That was despite the fact that the ZX80 had a single 8-bit processor running at 3.25 MHz, which is somewhat inferior to my current laptop (now a few years out of date) with its four 64-bit cores (8 threads) running at 2.5 GHz.

The reason I am discussing this point here is that I read an article recently titled “The technology struggles every 90s child can relate to”. I wasn’t exactly a child in the 90s but I still struggled with this stuff!

So here’s the list of struggles in the article…

1. Modems

Today I “know everything” because in the middle of a discussion on any topic I can search the internet for any information I need and have it within a few seconds. There are four components to this which weren’t available in the 90s. First, I always have at least one device with me. It’s usually my iPhone but I often have an iPad or laptop too. Second, I am always connected to the internet no matter where I am (with rare exceptions). Third, the internet is full of useful (and not so useful) information on any topic you can imagine. And finally, Google makes finding that information easy (most of the time).

None of that was available in the 90s. To find a piece of information I would need to walk to the room where my desktop computer lived, boot it, launch a program (usually an early web browser), hope no one else was already using the phone line, wait for the connection to start, and laboriously look for what I needed (possibly using an early search engine), allowing for the distinct possibility that it didn’t exist.

In reality, although that information retrieval was possible both then and now, it was so impractical and slow in the 90s that it might as well have not existed at all.

2. Photography

I bought a camera attachment for one of my early cell phones and thought how great it was going to be taking photos anywhere without the need to take an SLR or compact (film) camera with me. So how many photos did I take with that camera? Almost none, because it was so slow, the quality was so bad, and because it was an attachment to an existing phone it tended to get detached and left behind.

Today my iPhone has a really good camera built-in. Sure it’s not as good as my dSLR but it is good enough, especially for wide-angle shots where there is plenty of light. And because my iPhone is so compact and easy to take everywhere (despite its astonishing list of capabilities) I really do have it with me always. Now I take photos every day and they are good enough to keep permanently.

3. Input devices

The original item here was mice, but I have extended it to mean all input devices. Mice haven’t changed much superficially but modern, wireless mice with no moving parts are certainly a lot better than their predecessors. More importantly, alternative input devices are also available now, most notably touch interfaces and voice input.

Before the iPhone no one really knew how to create a good UI on a phone but after that everything changed, and multi-touch interfaces are now ubiquitous and (in general, with a few unfortunate exceptions) are very intuitive and easy to use.

4. Ringtones

This was an item in the article but I don’t think things have changed that much now so I won’t bother discussing this one.

5. Downloads

Back in the day we used to wait hours (or days) for stuff to download from on-line services. Some of the less “official” services were extremely popular back then; that seems to have tailed off a little, although downloading music and movies is still common, and a lot faster now.

The big change here is maybe the shift from downloads to streaming. The other difference might be that material can now be acquired legally for a reasonable price rather than risking the dodgy and possibly virus-infected downloads of the past.

6. Clunky Devices

In the 90s I would have needed many large, heavy, expensive devices just to do what my iPhone does now. I would need a gaming console, a music player with about 100 CDs to play in it, a hand-held movie player (if they even existed), a radio, a portable TV, an advanced calculator, a GPS unit, a compass, a barometer, an altimeter, a torch, a note pad, a book of maps, a small library of fiction and reference books, several newspapers, and a computer with functions such as email, messaging, etc.

Not only does one iPhone replace all of those functions, saving thousands of dollars and about a cubic meter of space, but it actually does things better than a lot of the dedicated devices. For example, I would rather use my iPhone as a GPS unit than a “real” GPS device.

7. Software

Software was a pain, but it is still often a pain today so maybe this isn’t such a big deal! At least it’s now easy to update software (it often happens with no user intervention at all) and installing over the internet is a lot easier than from 25 floppy disks!

Also, all software is installed in one place and doesn’t involve running from disk or CD. In fact, optical media (CDs and DVDs) are practically obsolete now which isn’t a bad thing because they never were particularly suitable for data storage.

8. Multi-User, Multi-Player

The article here talks about the problem of having multiple players on a PlayStation, but I think the whole issue of multiple player games (and multi-user software in general) is now taken for granted. I play against other people on my iPhone and iPad every day. There’s no real extra effort at all, and playing against other people is just so much more rewarding, especially when smashing a friend in a “friendly” race in a game like Real Racing 3!

So, obviously things have improved greatly. Some people might be tempted to get nostalgic and ask if things are really that much better today. My current laptop has 16 million times as much memory, hundreds of thousands of times as much CPU power, and 3000 times as many pixels as my ZX80, but does it really do that much more? Hell, yes!
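Those multipliers roughly check out. Here is a sketch of the arithmetic, where the full-HD laptop panel and the treatment of the ZX80’s 32×24 character cells as its “pixels” are my assumptions, and the raw throughput figure deliberately ignores per-clock efficiency gains (which push the real CPU ratio well beyond it).

```python
# Order-of-magnitude comparison: ZX80 (1980) vs a recent laptop.
# Laptop specs are those mentioned above; ZX80 figures are approximate.
zx80_ram, laptop_ram = 1 * 1024, 16 * 1024**3   # 1 KB vs 16 GB
print(laptop_ram // zx80_ram)                   # 16777216 (~16 million)

# Treat the ZX80's 32x24 character cells as its "pixels" (assumption).
zx80_cells = 32 * 24
laptop_pixels = 1920 * 1080                     # assumed full-HD panel
print(laptop_pixels // zx80_cells)              # 2700 (~3000)

# Raw throughput: clock x threads x word width. This ignores modern
# cores retiring several instructions per cycle, which multiplies the
# real ratio several times over.
ratio = (2.5e9 * 8 * 64) / (3.25e6 * 1 * 8)
print(round(ratio))                             # 49231
```

Whichever way you slice it, every ratio lands in the thousands or millions.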

There’s a classic British comedy sketch called the “Four Yorkshiremen sketch” originally created for the 1967 British television comedy series “At Last the 1948 Show”. The best way to describe the sketch is to use the description from Wikipedia, which calls it “…a parody of nostalgic conversations about humble beginnings or difficult childhoods, featuring four men from Yorkshire who reminisce about their upbringing. As the conversation progresses they try to outdo one another, and their accounts of deprived childhoods become increasingly absurd.”

It’s one of my favourite pieces of comedy ever, so I think I need to include it here, even though it really has very little to do with my actual subject in this blog post. So here it is (the scene includes four well-dressed men sitting together at a vacation resort drinking expensive wine and smoking cigars)…

Michael Palin: Ahh… Very passable, this, very passable.

Graham Chapman: Nothing like a good glass of Chateau de Chassilier wine, eh, Josiah?

MP: Aye. In them days, we’d a’ been glad to have the price of a cup o’ tea.

GC: A cup o’ COLD tea.

Eric Idle: Without milk or sugar.

Terry Gilliam: OR tea!

MP: In a filthy, cracked cup.

EI: We never used to have a cup. We used to have to drink out of a rolled up newspaper.

GC: The best WE could manage was to suck on a piece of damp cloth.

TG: But you know, we were happy in those days, though we were poor.

MP: Aye. BECAUSE we were poor. My old Dad used to say to me, “Money doesn’t buy you happiness.”

EI: ‘E was right. I was happier then and I had NOTHIN’. We used to live in this tiny old house, with great big holes in the roof.

GC: House? You were lucky to have a HOUSE! We used to live in one room, all hundred and twenty-six of us, no furniture. Half the floor was missing; we were all huddled together in one corner for fear of FALLING!

TG: You were lucky to have a ROOM! WE used to have to live in a corridor!

MP: Oh, we used to DREAM of livin’ in a corridor! Woulda’ been a palace to us. We used to live in an old water tank on a rubbish tip. We got woken up every morning by having a load of rotting fish dumped all over us! House!? Hmph.

EI: Well when I say “house” it was only a hole in the ground covered by a piece of tarpaulin, but it was a house to US.

GC: We were evicted from OUR hole in the ground; we had to go and live in a lake!

TG: You were lucky to have a LAKE! There were a hundred and sixty of us living in a small shoebox in the middle of the road.

MP: Cardboard box?

TG: Aye.

MP: You were lucky. We lived for three months in a brown paper bag in a septic tank. We used to have to get up at six o’clock in the morning, clean the bag, eat a crust of stale bread, go to work down mill for fourteen hours a day, week in, week out. When we got home, our Dad would thrash us to sleep with his belt!

GC: Luxury. We used to have to get out of the lake at three o’clock in the morning, clean the lake, eat a handful of hot gravel, go to work at the mill every day for tuppence a month, come home, and Dad would beat us around the head and neck with a broken bottle, if we were LUCKY!

TG: Well we had it tough. We used to have to get up out of the shoebox at twelve o’clock at night, and LICK the road clean with our tongues. We had half a handful of freezing cold gravel, worked twenty-four hours a day at the mill for fourpence every six years, and when we got home, our Dad would slice us in two with a bread knife.

EI: Right. I had to get up in the morning at ten o’clock at night, half an hour before I went to bed, eat a lump of cold poison, work twenty-nine hours a day down mill, and pay mill owner for permission to come to work, and when we got home, our Dad would kill us, and dance about on our graves singing “Hallelujah.”

MP: But you try and tell the young people today that… and they won’t believe ya’.

ALL: Nope, nope.

Look for this on YouTube if you want to enjoy it as a video with the Yorkshire accents.

Anyway, the point is that some older people today like to exaggerate how bad things were back when they were young, and comment on how easy people have it today. To a certain extent they are right of course, because some things are easier today than for previous generations, but equally the memories of the past don’t tend to be particularly accurate.

Many things today are a lot harder than in the past. Jobs can be harder to get, pay rates aren’t as generous, security is much lower, the rate of change in required skills is much higher, and general stress and the pace of life are greater. Sure, we have a lot of modern technology which makes our lives easier, but the advantages that science and technology have given us seem to be balanced by the disadvantages brought about by politics and economics.

In general I think the overall direction is positive, and this is shown by global figures indicating lower rates of poverty, famine, violence, and other negative factors. Sure, things could still be a lot better, and as the outcomes for some groups have improved they have worsened for others, but overall things are better.

But that wasn’t really the subject for this blog post. In fact, the subject was how things have changed in the last 30 years for computer geeks. As a geek who started working with computers in the 1980s I wanted to talk about how much easier (and harder) things are today.

Something like: Luxury! Back when I was young we used to have to wind up computer with key, load system with paper tape, took 3 hours… if we were lucky. Then we would type in program from keyboard and when we wanted to send an email ‘ad to catch nearest pigeon and tape piece o’ paper to its leg… etc… well you get the idea.

But seriously, now that I have wasted this blog post talking about a comedy sketch I think I will leave the original subject to next time. So check back in the next few days for an actual discussion on how much better/worse computers are now than 30 years ago. And do go and watch that comedy sketch (there are several versions on YouTube, but my favourite is titled “Monty Python – Four Yorkshiremen”). It’s a classic!

Many people think the internet is making us dumb. They think we don’t use our memory any more because all the information we need is on the web in places like Wikipedia. They think we don’t get exposed to a variety of ideas because we only visit places which already hold the same views as we do. And they think we spend too much time on social media discussing what we had for breakfast.

Is any of this stuff true? Well, in some cases it is. Some people live very superficial lives in the virtual world but I suspect those same people are just naturally superficial and would act exactly the same way in the real world.

For example, very few people, before the internet became popular, remembered a lot of facts. Back then, some people owned the print version of the Encyclopedia Britannica, and presumably these were people who valued knowledge because the print version wasn’t cheap!

But a survey run by the company found that the average owner only used that reference once per year. If they only referred to an encyclopedia once a year it doesn’t give them much to remember really, does it?

Today I probably refer to Wikipedia multiple times per day. Sure I don’t remember many of the details of what I have read, but I do tend to get a good overview of the subject I am researching or get a specific fact for a specific purpose.

And finding a subject in Wikipedia is super-easy. Generally it only takes a few seconds, compared with much longer looking in an index, choosing the right volume, and finding the correct page of a print encyclopedia.

Plus Wikipedia has easy to use linking between subjects. Often a search for one subject leads down a long and interesting path to other, related topics which I might never learn about otherwise.

Finally, it is always up to date. The print version was usually years old but I have found information in Wikipedia which refers to an event which happened just hours before I looked.

So it seems to me that we have a far richer and more accessible information source now than we have ever had in the past. I agree that Wikipedia is susceptible to a certain extent to false or biased information, but how often does that really happen? Very rarely in my experience, and a survey done a few years back indicated the number of errors in Wikipedia was fairly similar to Britannica (which is also a web-based source now, anyway).

Do we find ourselves mis-remembering details or completely forgetting something we have just seen on the internet? Sure, but that isn’t much to do with the source. It’s because the human brain is not a very good memory device. If it was true that we are remembering less (and I don’t think it is) that might even be a good thing because it means we have to get our information from a reliable source instead!

And it’s not even that this is a new thing. Warnings about how new technologies are going to make us dumb go back many years. A similar argument was made when mass production of books became possible. Few people would agree with that argument now and few people will agree with it being applied to the internet in future.

What about the variety of ideas issue? Well people who only interact with sources that tell them what they want to believe on-line would very likely do the same thing off-line.

If someone is a fundamentalist Christian, for example, they are very unlikely to be in many situations where they will be exposed to views of atheists or Muslims. They just wouldn’t spend much time with people like that.

In fact, again there might be a greater chance of being exposed to a wider variety of views on-line, although I do agree that the echo chambers of like-minded opinion that Facebook and other sites tend to become are a problem.

And a similar argument applies to the presumption that most discussion on-line is trivial. I often hear people say something like “I don’t use Twitter because I don’t care what someone had for breakfast”. When I ask how much time they have spent on Twitter I am not surprised to hear that it is usually zero.

Just to give a better idea of what value can come from social media, here are the topics of the top few entries in my current Twitter feed…

I learned that helium is the only element that was discovered in space before found on earth. (I already knew that because I am an amateur astronomer, but it is an interesting fact, anyway).

New Scientist reported that the ozone layer recovery will be delayed by chemical leaks (and it had a link if I want details).

ZDNet (a computer news and information site) tweeted the title of an article: “Why I’m still surprised the iPhone didn’t die.” (and again there was a link to the article).

New Scientist also tweeted that a study showed that “Urban house finches use fibres from cigarette butts in their nests to deter parasites” (where else would you get such valuable insights!).

ZDNet reported the latest malware problem with the headline “A massive cyberattack is hitting organisations around the world” (I had already read that article).

Oxford dictionaries tweeted a link to an article about “33 incredible words ending in -ible and -able” (I’ll read that and add it to my interesting English words list).

The Onion (a satirical on-line news site) tweeted a very useful article on “Tips For Choosing The Right Pet” including advice such as “Consider a rabbit for a cuddly, low cost pet you can test your shampoo on”.

Friedrice Nietzsche tweeted “five easy pentacles” (yes, I doubt this person is related to the real Nietzsche, and I also have no idea what it means).

Greenpeace NZ linked to an article “Read the new report into how intensive livestock farming could be endangering our health” (with a link to the report).

Otago Philosophy tweeted that “@Otago philosopher @jamesmaclaurin taking part in the Driverless Future panel session at the Institute of Public Works Engineers Conference” (with a link).

I don’t see a lot of trivial drivel about breakfast there. And where else would I get such an amazing collection of interesting stuff? Sure, I get that because I chose to follow people/organisations like science magazines, philosophers, and computer news sources, but there is clearly nothing inherently useless about Twitter.

So is the internet making us dumb? Well, like any tool or source, if someone is determined to be misinformed and ignorant the internet can certainly help, but it’s also the greatest invention of modern times, the greatest repository of information humanity has ever had, and something that, when treated with appropriate respect, will make you really smart, not dumb!

I have made a few comments recently on the theme of the “next great change” in society, when we will transition from the industrial age to the information age. I’m sure a lot of people think my ideas are just crazy dreams, and I sometimes wonder whether that is the case myself, but I was interested to see that the famous science historian, James Burke, said very similar things in a recent podcast he was featured in.

Our current society is concerned with distributing resources in an environment of scarcity, controlling the means of production of those resources, and recruiting the labour necessary for production on the best possible terms for the people in control.

The inevitable result of this is a deeply divided society where a tiny fraction of the people get most of the wealth available, and we certainly see that today in the grossly uneven ownership of wealth by the top 1%.

But let’s look at the massive changes which are about to make everything we currently know obsolete. Some of this is my opinion of what will happen in the next 20 to 30 years, and some is from the Burke podcast where he takes a more extreme view than me, but one which might be placed a bit further in the future too.

The basic point is that there will be no shortages. Chemical synthesis and 3D printing will provide any materials needed. Efficient power generation (it’s unclear exactly what that will be, but it could be ultra-efficient solar, improved nuclear such as thorium, or the ultimate power source: fusion) will provide all the power needed. Robotics will provide all the physical labour. And artificial intelligence will provide the creativity, invention, and overview.

Once a robot is made which can make more robots (of course with small improvements with each generation controlled by an AI) there is no need for a human to ever make anything again. And if the thinking machines (AIs) can design and improve themselves then everything changes because the rate of improvement would inevitably escalate exponentially.

Within a relatively short period of time there will be literally nothing left for humans to do.

And when that happens all our political structures, our economies, and even our value systems will become meaningless.

To many this sounds like a bleak prospect, and I agree to some extent. But what’s the point of resisting something which is inevitable? The Luddites resisted change which they saw as negative – and they were right in many ways – but they couldn’t stop the industrialisation process once it got started.

No doubt vested interests will try to stop these changes, or at least try to maintain control of them, but that just won’t be possible because there will be no point of leverage for them to base their power on. Who cares who has the most money when everything is free?

So getting back to that point about humans having nothing to do: what our role will be will very much depend on how the machines feel about us, because I’m sure that eventually we will no longer be able to control our ultra-intelligent creations.

If the machines decided that humans were pointless maybe they would just eliminate us, and maybe that would be the kindest thing. Or maybe they might find there is something about organic life which synthetic life couldn’t match so it still might have some value. Or maybe they might just want to keep humans around because we are self-aware and deserve a certain level of respect.

I do have to say that if I was an ultra-intelligent machine and looked around at how humans have behaved both in the past and present, I might be tempted to take the first option! Maybe it’s time for us to start behaving a little bit better so that when we are judged by our new synthetic masters we might be allowed to live.

It’s all rather Biblical, actually. Maybe there really will be a judgement day, just like Christianity tells us. But the type of god doing the judging won’t be the one imagined by the writers of any religious text. For a more accurate fictional appraisal of that future we should look at science fiction, not theology!

You are probably reading this post on a computer, tablet, or phone with a graphical user interface. You click or tap an icon and something happens. You probably think of that icon as having some meaning, some functionality, some deeper purpose. But, of course, the icon is just a representation for the code that the device is running. Under the surface the nature of reality is vastly more complex and doesn’t bear the slightest relationship to the graphical elements you interact with.

There’s nothing too controversial in that statement, but what if the whole universe could be looked at in a similar way? In a recent podcast I heard an interview with Donald Hoffman, a professor of cognitive science at the University of California. He claims that our models of reality are just that: models. He also claims that mathematical modelling indicates that the chance that our models are accurate is precisely zero.

There are all sorts of problems with this perspective, of course.

First, there is solipsism, which tells us that the only thing we can know for sure is that we, as individuals, exist. If we didn’t exist we couldn’t have the thought about existence, but the reality of anything else could be seen as a delusion. Ultimately I think this is irrefutable: there is no way to prove that what I sense is real and not a delusion.

While I must accept this idea as being ultimately true I also have to reject it on the basis that it is ultimately pointless. If solipsism is true then pursuing ideas or understanding of anything is futile. So our whole basis of reality relies on something which can’t be shown to be true, but has to be accepted anyway, just to make any sense of the world at all. That’s kind of awkward!

Then there is the fact that the same claims of zero accuracy of models of the world surely apply to his models of models of the world. So, if our models of reality are inaccurate does that not mean that the models we devise to study those models are also inaccurate?

And if the models of models are inaccurate does that mean there is a chance that the models themselves aren’t? We really can’t know for sure.

I would also ask what does “zero accuracy” mean. If we get past solipsism and assume that there is a reality that we can access in some way, even if it isn’t perfect, how close to reality do we have to be to maintain some claim of accuracy?

And the idea of zero accuracy is surely absurd because our models of reality allow us to function predictably. I can tap keys on my computer and have words appear on the screen. That involves so much understanding of reality that it is deceptive to suggest that there is zero accuracy involved. There must be a degree of accuracy sufficient to allow a predictable outcome, at the level of my fingers making contact with the keys all the way down to the quantum effects working within the transistors in the computer’s processor.

So if my perception of reality does resemble the icon metaphor on a computer then it must be a really good metaphor that represents the underlying truth quite well.

There are areas where we have good reason to believe our models are quite inaccurate, though. Quantum physics seems to provide an example of where incredibly precise results can be gained but the underlying theory requires apparently weird and unlikely rationalisations, like the many worlds hypothesis.

So, maybe there are situations where the icons are no longer sufficient and maybe we never will see the underlying code.

I hear a lot of debate about whether the internet is making us dumb, uninformed, or more close-minded. The problems with a lot of these debates are these: first, saying the internet has resulted in the same outcome for everyone is too simplistic; second, these opinions are usually offered with no justification other than it being “common sense” or “obvious”; and third, whatever the deficiencies of the internet, the real question is whether it is better or worse than having no internet at all.

There is no doubt that some people could be said to be more dumb as the result of their internet use. By “dumb” I mean being badly informed (believing things which are unlikely to be true) or not knowing basic information at all, and by “internet use” I mean all internet services people use to gather information: web sites, blogs, news services, email newsletters, podcasts, videos, etc.

How can this happen when information is so ubiquitous? Well information isn’t knowledge, or at least it isn’t necessarily truth, and it certainly isn’t always useful. It is like the study (which was unreplicated so should be viewed with some suspicion) showing that people who watch Fox News are worse informed about news than people who watch no news at all.

That study demonstrates three interesting points: first, people can be given information but gather no useful knowledge as a result; second, non-internet sources can be just as bad a source as the internet itself; and third, this study (being unreplicated and politically loaded) might itself be an example of an information source which is potentially misleading.

So clearly any information source can potentially make people dumber. Before the internet people might have been made dumber by reading printed political newsletters, or watching trashy TV, or by listening to a single opinion at the dinner table, or by reading just one type of book.

And some people will mis-use information sources where others will gain a lot from exactly the same source: some get dumber while others get a lot smarter using the same material.

And (despite the Fox News study above) if the alternative to having an information source which can be mis-used is having no information source at all, then I think taking the flawed source is the best option.

Anecdotes should be used with extreme caution, but I’m going to provide some anyway, because this is a blog, not a scientific paper. I’m going to say why I think the internet is a good thing from my own, personal perspective.

I’m interested in everything. I don’t have a truly deep knowledge about anything but I like to think I have a better than average knowledge about most things. My hero amongst Greek philosophers is Eratosthenes, who was sometimes known as “Beta”. This was because he was second best at everything (beta being the second letter of the Greek alphabet, which I can recite in full, by the way).

The internet is a great way to learn a moderate amount about many things. Actually, it’s also a great way to learn a lot about one thing too, as long as you are careful about your sources, and it is a great way to learn nothing about everything.

I work in a university and I get into many discussions with people who are experts in a wide range of different subjects. Obviously I cannot match an expert’s knowledge about their precise area but I seem to be able to at least have a sensible discussion, and ask meaningful questions.

For example, in recent times I have discussed the political situation in the US, early American punk bands, the use of drones and digital photography in marine science, social science study design, the history of Apple computers, and probably many others I can’t recall right now.

I hate not knowing things, so when I hear a new word, or a new idea, I immediately Google it on my phone. Later, when I have time, I retrieve that search on my tablet or computer and read a bit more about it. I did this recently with the Gibbard–Satterthwaite theorem (a mathematical theorem which involves the fairness of voting systems) which was mentioned in a podcast I was listening to.

Last night I was randomly browsing YouTube and came across some videos of extreme engines being started and run. I’ve never seen so much flame and smoke, and heard so much awesome noise. But now I know a bit about big and unusual engine designs!

The videos only ran for 5 or 10 minutes each (I watched 3) so you might say they were quite superficial. A proper TV documentary on big engines would probably have lasted an hour and had far more detail, as well as having a more credible source, but even if a documentary like that exists, would I have seen it? Would I have had an hour free? What would have made me seek out such an odd topic?

The great thing about the internet is not necessarily the depth of its information but just how much there is. I could have watched hundreds of videos on big engines if I had the time. And there are more technical, detailed, mathematical treatments of those subjects if I want them. But the key point is that I would probably know nothing about the subject if the internet didn’t exist.

Here are a few other topics I have got interested in thanks to YouTube: maths (the Numberphile series is excellent), debating religion (I’m a sucker for an Atheist Experience video, or anything by Christopher Hitchens), darts (who knew the sport of darts could be so dramatic?), snooker (because that’s what happens after darts), Russian jet fighters, Formula 1 engines, and classic British comedy (Fawlty Towers, Father Ted, etc).

What would I do if I wasn’t doing that? Watching conventional TV maybe? Now what were my options there: a local “current affairs” program with the intellectual level of an orangutan (with apologies to our great ape cousins), some frivolous reality TV nonsense, a really unfunny American sitcom? Whatever faults the internet has, it sure is a lot better than any of that!

I deal with several larger companies for IT services and products. I buy products from them, I buy services from them, and I get support from them when things go wrong. I also deal with smaller companies, especially for specialised software and other products, and sometimes I need support from them as well. I think, after many years, I have noticed some general patterns in the way these larger and smaller companies operate.

Obviously I am just talking about personal experience and anecdotes here, but this is a blog, not a scientific paper, so I’m going to proceed with that understanding.

First, what is it I have noticed?

Well, big companies are sometimes the only choice, whether you like them or not, because there are some products which can only realistically be produced by big corporations, if we operate under our current economic model. For example, if I want to work with computers I really have to buy one from a large corporation. And if I want to work in the Apple world my choices are down to one!

The products these companies produce aren’t necessarily bad, although I believe some of them are, but there is a huge amount of room for improvement. For example, how can Microsoft keep producing such junk with successive versions of Office for Mac? It’s hard to imagine how a company with so many resources available can continue to produce such slow, unreliable, ugly rubbish!

Even the good products have serious defects. For example, I really like Apple’s hardware (including the Mac, iPad, iPhone, and Apple Watch, all of which I use every day) but, again considering the resources (and massive amounts of cash) they have available I think they could do so much better.

And that is not so much with the design of the hardware, but the pricing, bundling, compatibility, and other issues. For example, with the new MacBook Pros, why are there no USB-C to USB-A adapters included, and why aren’t they the same price as (or cheaper than) the previous models?

Another example of these issues peripheral to the main product is licensing. Why is Adobe’s licensing so complicated? Why can’t I just buy a product from them and use it? I can’t, so now Adobe has joined Microsoft as a company whose products I just don’t use any more.

And finally there is the big one: service. The most abysmal, frustrating, pointless service always comes from the big companies. Recently I waited on hold for almost 2 hours with the helpdesk of New Zealand’s biggest telecom company, Spark. The phone still wasn’t answered so I just gave up. I did manage to reach their online chat service, but that was useless and I got no real answers.

The worst helpdesk service I have ever experienced was probably with HP. I basically told them what was wrong but they insisted I go through a “check-list” of possible causes before they would try anything else. After an hour of this I agreed to try the things they suggested and call back. After doing this and re-contacting the helpdesk they wanted to go through the list again before they would even listen to the issue. That’s what happens when the helpdesk staff just follow a list of instructions and have no real idea what they’re doing.

On the other hand, small companies I have dealt with almost always provide great service. It’s unusual to even have an issue to resolve, but when it does happen (including licensing issues I had with one product) the problem is fixed almost instantly.

Why? Why do small companies perform so much better than big ones? Well, I think there are two reasons…

First, big companies (and other organisations) always suffer from communications problems because there are always too many layers between the customer and the people who do the real work. These layers are sometimes bureaucratic – like useless customer service managers – and sometimes structural – like helpdesks run by unskilled (cheap) staff.

I’m not saying every helpdesk is bad, I’m just saying that the good ones are the exception rather than the rule. And I’m not saying every manager is useless… actually I am. In fact, they are worse than useless.

Second, the policies set by big companies come from the wrong people. They come from professional managers (and you already know what I think of them) who have no concept of what is really required and what the customer wants. Instead of reality they rely on instructions from more senior managers, accountants who want to reduce costs, lawyers who just want to avoid legal issues, and that primary source of bad policy: best practice.

If the policies in big companies (and those should only be used as guidelines, not absolute rules) were made by the same people who produce the products and provide the services, and if customers could discuss issues directly with those people, things would be so much better. But, of course, the bureaucrats aren’t going to give up their influence any time soon.

In summary, I don’t think the problem is Apple, or Microsoft, or Adobe, it’s big business in general. So I try whenever possible to use smaller companies, because I like to support the underdog, because that’s where the real innovation happens, and because that’s often where you get the best deal.