Friday, December 18, 2009

The first man-machine interface between computers and their operators was a row of flashing lights and an array of switches.

It was simple and, for the time, relatively effective.

However, we've come a long way since those early days and their crude interfaces.

Today we've replaced the flashing lights with cool flat-screen LCD displays and the switches have been usurped by the ubiquitous QWERTY keyboard and mouse.

So where to from here?

What's the next big step in making the interface between computers and their users even better?

Well for a long time it appeared as if voice recognition would be a big and important feature for future computers. That belief seems to have fallen by the wayside however, and even though such systems are now available, they're seldom used for anything other than directed telephone menus.

Even the long-awaited touch-screen seems to be a solution looking for a problem these days.

The technology is certainly available and at a very reasonable price -- but it simply hasn't been embraced by either manufacturers or users.

Why is that?

Well a recent survey of mobile phone users found that while touch screens are nice in theory, a surprisingly large number of people currently using the technology would prefer to go back to good old-fashioned keys.

The situation gets worse in a desktop computing environment. Just try pointing at your screen a lot and you'll realise how heavy your arms get. Converting the upright screen into a desktop tablet that sits flat won't do much to help either. Then it will take up valuable desk real-estate, and postural issues that can cause neck-fatigue will raise their ugly head.

More exciting, and even less practical, avenues for man/machine interaction are being touted almost every day.

There are those who believe that contact lenses with built in LEDs will provide "heads-up" display capabilities, allowing computer users to see the output of their machines without even needing a separate screen.

Successful tests have also been carried out with direct neural stimulation, which allows the computer to bypass your eyes entirely and deliver its output straight into your brain's visual cortex.

In the other direction, great advances are being made in the area of using thought to control computers. Now while this might be good news for those with physical disabilities, it's not likely to be useful at the office or while gaming at home on your PC.

No, it would appear very much as if we're stuck with our flat-panel screens, keyboards and mice for quite some time to come.

And you know, unless you welcome the prospect of plugging your PC directly into your brain, that's not such a bad thing.

Friday, December 11, 2009

An interesting story surfaced this week in which it was shown just how vulnerable many NZ businesses may be to a form of commercial sabotage performed with the aid of the internet.

According to reports, a Napier florist was able to alter the business details of her competitors as they appeared on the Google Maps service.

The ease with which this sabotage was performed actually beggars belief and represents a salutary warning for all those who (whether they know it or not) have a presence on this increasingly useful Google service.

By simply creating a new Google user account, the allegedly fraudulent florist was able to claim ownership of her competitors' details and alter them -- thus derailing attempts by web-surfers to contact them.

Google has warned anyone and everyone whose business appears on Google Maps to make sure they claim and check their listings for accuracy. Once this is done, only those with the correct account and authentication can update the details of a specific business.

Just for fun, I scoped out many of the businesses that are listed for the town where I live and found that although only a small percentage of them had an entry, far too many of those entries were available to be edited by a third party.

And when you check your listing be very careful to look for subtle changes -- such as a single digit incorrect in your phone-number or PO box. The canny saboteur will ensure that their changes are small enough to be overlooked by people in a hurry.

If your business isn't already listed in Google's Local Business Center then you're missing out on some valuable free publicity and promotion so you might want to spend a few minutes and ensure you're available to be found by users of Google Maps.

It always amazes me how many people spend a fortune on advertising yet overlook the massive amount of free promotion they can get online simply by asking.

Yes, it's never been easier to promote yourself to an eager market for free -- but it's also never been easier for a competitor to ankle-tap your hard work -- if you're not vigilant and aware.

Friday, December 4, 2009

If you've been upgrading your PCs regularly, chances are that the machine you're using now is either dual or quad core.

This means there are two or four processors built on the same piece of silicon and, due to their very close proximity and integration, it's possible for suitable software to divide up complex tasks so that parts of the code run concurrently on each core.

When the practical speed limits of switching and bandwidth are reached in any generation of computer technology, the only way to improve throughput is to simply "divide and conquer" by sharing the code between these multiple cores.

A suitable real-world analogy: to increase the flow of traffic on a roadway between two cities, you can either raise the speed at which vehicles travel (but only up to a point) or simply add more lanes to the roadway. CPU gigahertz are the equivalent of the speed limit; cores are the equivalent of extra lanes.

So you'd think that after quad core there'd be eight core and then possibly even sixteen-core processors turning up next.

Well, as seems so often to be the case in the computer world, Intel hasn't bothered just doubling the core-count, they've gone straight to a 48-core processor they're calling the Single-chip Cloud Computer (SCC).

In order to create all these cores on a single chip, over 1.3 billion individual transistors are etched onto the surface of the silicon from which the CPU is formed.

Back in the 1960s, a single transistor cost around US$6. Even if a computer of this scale could have been created from individual transistors back then, the price tag would have been $8 billion. Although Intel hasn't announced a price for the SCC, which is due to be released late in 2010, the odds are that it will be well under US$1,000 even in small quantities.

However, Intel aren't resting on their laurels and claim that they'll soon be able to produce a chip with 100 cores. The current "core-count" record however, must go to the graphics chip manufacturer Nvidia who have announced their intention to build a 512-core chip.

The real question is -- what are users going to do with all this grunt, and where will the software come from to take advantage of the enormous level of parallel computing power these new devices will represent?

Designing software for a dual or quad core processor is simple enough -- given that many applications can easily benefit from being split into several concurrently executing tasks. Harnessing the power of 48 cores however, is a whole lot more complex.
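That "divide and conquer" approach is easy to sketch in code. The following is a minimal Python illustration (using the standard multiprocessing module, not the SCC's actual programming model) that splits a large summation into chunks and hands each chunk to a separate worker process -- one per core:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [lo, hi) -- one chunk of the overall task."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split 0..n-1 into one chunk per worker and sum the chunks concurrently."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:          # one process per "core"
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))       # same result as a single-core sum
```

The catch described above is visible even in this toy: the job only speeds up because it divides cleanly into independent chunks, and the final combining step still runs on a single core.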

The reality is that in a typical desktop situation, most of the 48 cores would simply be idling -- waiting around for something to do.

The real application for one of these uber-powerful processors will be in areas such as image/video processing, cryptography and virtual modeling. Nvidia's 512-core chip for example, will clearly be designed to provide lightning fast graphics processing for its range of display cards.

However, when it comes to general-purpose processors such as the one in your desktop PC, it's only where large chunks of data can be broken up into smaller segments that can all be processed at the same time that the cores of the SCC will be kept running at full speed.

For this reason, your average Windows-based office PC will probably stick with just a handful of cores for the foreseeable future. Both Intel and AMD have announced more modest 6-core processors for next year's desktop computers.

Nevertheless, Microsoft has stated that they're already planning to produce code that utilises the extra processing power that the SCC offers.

Until such time as newer designs (based on spintronics, optical or quantum technologies) make it out of the laboratory and onto the shelves, adding more cores is the easiest way to get more grunt from your boxes, so don't be surprised if we see "core-count" being touted as a major selling feature, even if you don't need it.

Friday, November 27, 2009

The battle between Rupert Murdoch and the rest of the "evil news-stealing world" seems ready to reach a crucial point very soon, with the future of free news content hanging in the balance.

Right now we tend to take it for granted that Google is "the" internet search engine and that other players, even Microsoft's much-hyped "Bing", are little more than "also-rans".

While that might be true now, there's a small chance that Murdoch and his publishing empire may join forces with Microsoft to significantly alter the balance of power in the online world.

For some time now, Murdoch has been moaning about the way that Google "steals" the news published by his online news and entertainment websites. The way he sees it, the search engine giant is exploiting his intellectual property for profit without proper reimbursement.

Of course you don't have to be a rocket scientist to realise that if NewsCorp really didn't want Google to index its websites, and thus link to them from its search engine and Google News services, all they had to do was add one small line of code to those pages and they would, as if by magic, be dropped from those indexes.
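For the record, that "one small line of code" is the standard robots-exclusion meta tag (a matching Disallow rule in the site's robots.txt file achieves the same thing):

```html
<!-- placed in the <head> of any page to be dropped from search indexes -->
<meta name="robots" content="noindex, nofollow">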

So why hasn't Murdoch done this already, if he's so worried?

Might it be that he's not as stupid as he seems and is actually well aware that Google's links are providing his sites with valuable extra traffic they might not otherwise get?

Or has Murdoch played a very canny game that has resulted in Microsoft rolling up to his door with its chequebook wide open?

The latest chapter in this evolving saga, you see, is that Murdoch is now in talks with Microsoft to provide them with the exclusive right to index his websites.

If NewsCorp and Microsoft are able to strike a mutually acceptable deal then, almost overnight, all of Murdoch's online properties will vanish from Google and only be searchable through Bing.

Now if NewsCorp was some second-rate publisher this wouldn't even be worthy of comment -- but they are a massive player in the news and entertainment fields so if Google loses their content from its indexes, it will actually be quite a blow for services such as Google News.

By the same token, it will certainly send a lot more people to Bing, which is just what Microsoft needs to really make a dent in Google's online dominance.

For that reason, I don't think it's hard to work out that Microsoft may well be offering some significant amount of money to NewsCorp -- an amount that will go some way to stem the rapidly dwindling revenues that all in the news industry are presently experiencing.

So, while commentators the world over have been casting aspersions on Murdoch's ability to understand the internet and even his business savvy, it looks as if he might just get the last laugh this time.

What's more, I doubt we'll see the threatened NewsCorp paywalls -- simply because Murdoch's been smart enough to find a new way to make news pay online.

Or I could be completely wrong -- but it's food for thought, isn't it?

Friday, November 20, 2009

Carbon is the climate-killer element of the 21st century -- however, it's also a potential miracle material that could not only dramatically reduce our energy requirements but also revolutionise the way we build computers and other electronic devices.

While most of us are busy releasing billions of tonnes of climate-altering carbon into the atmosphere every year, there are some dedicated researchers who already claim that carbon will be the basis for a whole new generation of technologies that will significantly improve our lives.

One of the key new carbon-based materials is graphene, effectively a lattice of carbon just one atom thick.

Graphene displays a wide range of very interesting physical and electrical properties that might represent the future of ultra-miniature hi-speed electronics and high-capacity energy storage.

While researchers struggle to come up with more effective battery technologies, increasing attention is being paid to the use of graphene films to create "supercapacitors".

Unlike chemical batteries that store electricity by way of reversible chemical reactions, supercapacitors directly store the electrons that create the flow of electrical current.

In order to store enough electrons to rival the capacity of chemical batteries, a supercapacitor needs an enormous internal surface area on its plates. The thicker the plates, the less area can be crammed into a given volume and the heavier the capacitor will become. Existing capacitor technologies rely either on thin metal foil or film, often separated by an insulating sheet, liquid or even a simple oxide layer on one of those plates.

In such capacitors, the total storage ability is limited primarily by the thickness of the metal plates. By switching to graphene it is expected that the capacitance (and therefore the number of electrons that can be stored in a given volume) will be increased by many orders of magnitude.
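The arithmetic behind such claims is straightforward: a capacitor stores E = ½CV² joules, so storage rises linearly with capacitance. A short Python sketch (the component values here are illustrative assumptions, not measurements) shows the gulf between a conventional electrolytic capacitor and a present-day supercapacitor cell:

```python
def capacitor_energy_wh(capacitance_farads, voltage):
    """Energy stored in a capacitor, E = 0.5 * C * V^2 joules, in watt-hours."""
    joules = 0.5 * capacitance_farads * voltage ** 2
    return joules / 3600.0

# Illustrative component values (assumptions, not measured data):
electrolytic = capacitor_energy_wh(0.01, 16)   # a 10,000 uF electrolytic at 16 V
supercap = capacitor_energy_wh(3000, 2.7)      # a 3000 F supercapacitor cell at 2.7 V

print(f"electrolytic: {electrolytic:.6f} Wh, supercapacitor: {supercap:.2f} Wh")
```

Even that supercapacitor's roughly 3 Wh is modest beside a typical laptop battery's 50-odd Wh, which is why the hoped-for orders-of-magnitude jump in effective plate area matters so much.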

Within a decade or two, chemical batteries as we currently know them may have all but disappeared.

Some researchers suggest that in their place will be graphene-based supercapacitors delivering a far longer service life while allowing recharging in just seconds rather than hours. What's more, they'll be capable of delivering much higher peak currents, along with significantly higher energy densities.

The important bits of a modern hi-tech piece of consumer electronics such as a netbook computer are presently comprised largely of silicon and lithium. The netbook of the future could well be heavily based on carbon instead.

One thing is for sure with this new carbon technology -- it's unlikely we'll ever run out of the raw material and the more of it we can sequester into a new generation of electronic devices, the better off our climate will be.

Friday, November 13, 2009

One of the most loved and hated pieces of code on the internet today is the Adobe Flash player.

It's thanks to the power that this browser plug-in provides, that services such as YouTube and many others have flourished.

Flash has given web designers a rich ability to create vivid multi-media displays on webpages and to deliver hitherto unattainable levels of interactivity between the page and the person reading it.

Unfortunately, as is so often the case with such complex pieces of software, Flash is not without its problems.

The first problem is what some see as the blatant over-use of this system. Whenever you see irritating ads that immediately start playing video (sometimes with an embarrassing or irritating level of sound), chances are that Flash is doing the work.

What's more, the bulk of those distracting, garish, eye-catching, annoying highly-animated banner ads are also delivered through the Flash player that is part of most browser setups.

However, more recently there has been an even more sinister side to Flash -- and that's the discovery of nasty security holes that make every website using Flash a potential booby-trap for unsuspecting websurfers.

The flaw has the potential to affect any website that uses a Flash applet to allow the upload of files. This would include sites such as GMail and YouTube, both of which rely on Flash applications to perform intelligent upload operations. Fortunately these companies have already taken great care to try and mitigate the problem.

However, by uploading their own carefully crafted malevolent Flash applications to less well administered sites, hackers could effectively then deliver those packages to other unsuspecting internet users with Flash-enabled browsers.

Adobe says that the flaw is not patchable and that the responsibility for ensuring that it can't be exploited lies with the website operators themselves.

In an amusing twist of irony, at least one media outlet is reporting that some of Adobe's own websites are themselves suffering from this very Flash-induced vulnerability.

This leaves the average Web-user in a bit of a quandary.

The only guaranteed way to avoid exposing themselves to the very clear and present danger associated with Flash right now is to uninstall the plug-in itself -- but that would leave many websites that rely solely on Flash for navigation, unusable.

Another option is to install a Flash-blocking plug-in so that visitors to an untrusted page can select for themselves whether they enable Flash on an applet-by-applet basis.

No doubt this latest revelation will again rekindle the debate as to whether Flash has been grossly over-used and abused by webdesigners, something which certainly seems to be the case when so many websites rely totally on Flash applets with no alternative means of navigation.

Even NZ's Official Lotto website falls into this category.

Fortunately, with HTML 5.0 ready for the big-time, Flash may be almost reaching its use-by date and this proprietary system (complete with its security holes) may well fade into oblivion, from where some claim it should never have emerged in the first place.

The truth is that Flash is like alcohol... a little bit of it can make life more fun and does no harm. Over-use however, can lead to a whole bag of misery for all concerned.

Friday, November 6, 2009

One of the amazing things that has been done with modern computer technology is emulation (or simulation).

Computers have become so powerful that they can now easily create images, sounds and movies of objects that simply never really existed -- but in a way that produces total realism.

The most obvious showcase for this technology is the incredibly realistic computer-generated images (CGI) to be found in so many contemporary movies.

NZ's own Weta Workshops made extensive use of CGI in the Lord of the Rings movies and separating computer imagery from reality has now become a difficult task, even for the most expert observer.

We now have the ability to return long-dead or retired actors to the screen by way of CGI and some of the later Terminator movies have apparently done just that with the computer-created image of Arnold Schwarzenegger as the original Terminator robot.

However, all this uber-accurate simulation seems to be creating problems.

It's not an issue with the massive amounts of computer power required to accurately model the 3D images or the complex physics involved -- it's all about lawyers and intellectual property rights.

An example of what may well become a vexing issue in future recently surfaced when a company began selling downloads of popular tracks by The Beatles.

For whatever reason, tracks by the Beatles have never been made available for legal download, so plenty of people, especially EMI who control the rights for the Beatles' albums, became very interested in exactly what was going on.

When challenged, Media Rights Technologies (MRT), the company offering the downloads, responded by saying that they owned the copyrights because these weren't the original Beatles recordings, they were "psychoacoustic simulations" of those tracks.

MRT head Hank Risan describes these psychoacoustic simulations as being the "synthetic creation of that series of sounds which best expresses the way a particular melody should be heard as a live performance".

To be honest, this sounds like a load of bunkum to me.

Maybe MRT have added a bit of reverb, noise and delay to the original tracks so as to simulate a live performance from a studio recording -- but it does raise a very important possibility.

It's only a matter of time before programmers create acoustic models of famous voices.

Once this is done, it will become quite possible to recreate famous bands such as The Beatles and effectively "re-record" those much loved songs.

Although the melodies and lyrics will still be covered by the original copyrights, will the actual "virtual" performance be considered a unique work in its own right and therefore granted separate copyright protection?

What if the "virtual Beatles" band then covers other songs or is even programmed to play entirely new compositions?

Will the current owners of the copyright on existing Beatles' albums have any call on this totally new material?

In another twist, pop group No Doubt are suing Activision, makers of the video game Band Hero, over a similar conflict.

The game makers have created a virtual copy of the band (with their permission) and the software allows game-players to have that artificial band play tracks that the real band never actually recorded. This has irked No Doubt members who feel that their virtual personas may end up singing inappropriate songs.

I see a real legal nightmare appearing just around the corner and we can expect many groundbreaking precedents to be set in coming years as reality and simulation go head to head in the courts.

Friday, October 30, 2009

No, I'm not writing about the move to equip long-haul flights with wireless internet connections for passengers and their laptops.

I'm talking about the fact that this week the internet celebrated its 40th birthday.

From some crudely interconnected computers back in 1969 to the now ubiquitous "world wide web", the internet has grown far beyond anything that Leonard Kleinrock ever envisaged when, forty years ago, he and some colleagues started hooking a few computers together.

Of course the first thing that happened with that hookup was that the login application crashed. Thank goodness we now have vastly improved software for without it, the Net would never have gotten off the ground.

Like many things relating to computers, the growth of the net has been somewhat exponential and after that initial 1969 hookup, things progressed only very slowly. In fact, even 14 years later, in 1983, there were still little more than 200 computers comprising the whole internet.

One of the major growth spurts took place in the late 1990s, when the ready availability of low-cost dial-up modems and an abundance of ISPs saw increasing numbers of "ordinary people" discovering the delights that were to be had online.

Today it is estimated that there are some 1.7 billion people who use the internet.

But what's this got to do with aircraft?

Well strangely enough, aviation seems to have followed a similar growth path.

Once the Wright brothers established the viability of heavier-than-air flight back in 1903, the fledgling aviation industry slowly began to grow and expand.

It took almost forty years before commercial passenger flights became commonplace and recreational aviation was an affordable pastime.

If we compare the state of the aviation industry in 1943 with that of the current day, perhaps we get a little insight into where the internet may be headed.

These days, people think nothing of committing their lives and valuables to regular long-haul flights and flying has become just another part of living in the 21st century.

Perhaps the internet will continue to invade our businesses and all aspects of our day-to-day lives.

While some approach cloud-computing with the same trepidation that those early pioneers approached leaving the safety of the ground, it's only a matter of time before we feel confident in committing all our valuable data, information and services to "the ether" that makes up the internet.

And, just as the aviation industry became a key component of war in the 1940s, it now seems that the internet constitutes a new frontline battlefield -- with cyberwarfare and cyberterrorism "hot topics" within the halls of many nations' defense departments.

Just as recent military conflicts have shown the crucial importance of air-power in war, so we will see online strategies and weapons becoming increasingly important in future battles.

That there are so many similarities between the growth of aviation and the growth of the internet seems to indicate that the human race has a tenacious ability to harness technology and adapt it quickly to the needs of the moment -- that bodes well for our future on the planet.

Although most people recognise the internet as a huge step forward in our technological evolution, I'm picking that we have yet to feel the full import of its might. The best is yet to come.

The synergy of the combined knowledge and expertise the Net can bring together may well see huge advances in other areas of science and technology.

Now that we really do have a global community of scientists and researchers who can collaborate over great distances in real-time, look out for even more impressive breakthroughs in a wide range of fields.

The internet is big -- but it's going to get a lot bigger.

Comparatively speaking, we're still using Tiger Moths and DC3s to move our data around. It's only a matter of time before we roll out the cyberspace equivalents of the Jumbo Jet and the Concorde. That's when the real fun will begin!

I just hope I live long enough to see a good deal of the internet's future. However, I've already been asked "what was it like before the internet?" by several inquiring youngsters.

Friday, October 23, 2009

What part of your system is probably the biggest bottleneck to the overall performance of your computer?

Most people will realise that I'm talking about the hard-drive.

The basic design of the hard drive hasn't changed for decades. It consists of one or more metal disks that spin around at a great number of RPMs (over 10,000 in more recent hi-performance drives) and a number of tiny heads that float less than a hair's breadth above those platters in a way that represents a disaster just waiting to happen.

Imagine a modern jet-fighter flying at the speed of sound just a couple of metres above the ground - well if you were to scale up the internals of your hard-drive, that's the kind of action we're talking about.

And, like any device that contains moving parts, your hard drive will eventually wear out -- that's if the heat generated by its electronics and motor doesn't kill it first.

This, of course, is why we make backups and use RAID arrays when storing important data.

However, the days of the now ancient mechanical hard-drive are drawing to a close and already we're seeing a family of (netbook) computers that ship without them, yet offer the ability to store gigabytes of programs and data.

Enter the age of the Solid State Drive (SSD).

Thanks to rapid advances in non-volatile memory technology, it's now possible to create solid state storage systems that emulate a regular SATA hard drive -- but without all the vulnerable mechanics or moving parts.

Think of them as an uber-USB drive -- but much bigger and faster.

SSDs have numerous advantages over conventional hard drives and those include far greater reliability, greatly improved access speeds, less vulnerability to unexpected power failures, potentially lower power consumption and faster read/write data rates.

These solid-state drives are also far less susceptible to the effects of physical shock or extremes of temperature and humidity. In effect, they're almost the perfect storage medium.

The only real downside at present is their somewhat higher price compared with mechanical drives -- but that will also change over time.

Already available in regular hard-drive form-factors and with the ubiquitous SATA electrical interface, it's now possible to perform "plug and play" replacements with almost any desktop or laptop computer, swapping the existing drive for an SSD equivalent of up to 1TB.

So why has it taken so long for SSDs to catch on? Apart from price, what's been holding them back?

Well part of that boils down to a small issue called write-cycle-life or "write endurance".

The problem with Flash memory (which most SSDs use) is that you can only write to it a fixed number of times before the computer equivalent of Alzheimer's sets in. Once the write-cycle life of a Flash memory cell is exceeded, it can no longer reliably hold data and the write may either fail or random bit-flipping can occur.

Early Flash memory suffered from a write-cycle life that was measured in just thousands of write operations but this figure has slowly improved and now most good quality silicon has a life measured in millions of individual write-cycles.

Of course you might think that's still an awfully small number, when you work out how quickly computers like to flip bits and how often data gets written to a hard drive -- but things really aren't that bad.

That's because the makers of SSDs have implemented some clever schemes to mitigate the limited write-cycle life of the devices used.

For a start, they often use a level of indirection that ensures the write operations are spread out across the entire array of memory. This means that when you re-write a sector to the SSD, it may actually be written to a totally different part of the memory array from the one it originally occupied. This is done so as to spread those write-cycles out and not excessively "wear" any given set of memory locations.
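A toy version of that indirection scheme can be sketched in a few lines of Python. This is an illustrative model only -- real flash-translation layers track erase blocks, garbage-collect and reserve spare capacity -- but it shows how a logical-to-physical mapping spreads writes across the least-worn blocks:

```python
class WearLevelledStore:
    """Toy wear-levelling scheme: each logical-sector write lands on the
    least-worn free physical block, via a logical->physical mapping table.
    The block count must exceed the number of live sectors (spare blocks)."""

    def __init__(self, physical_blocks):
        self.wear = [0] * physical_blocks   # write count per physical block
        self.mapping = {}                   # logical sector -> physical block
        self.data = [None] * physical_blocks

    def write(self, sector, payload):
        in_use = set(self.mapping.values())
        # Choose the least-worn block not currently holding live data.
        target = min((b for b in range(len(self.wear)) if b not in in_use),
                     key=lambda b: self.wear[b])
        self.data[target] = payload
        self.wear[target] += 1
        self.mapping[sector] = target       # the old block rejoins the free pool

    def read(self, sector):
        return self.data[self.mapping[sector]]
```

Rewriting one "hot" sector 80 times on an 8-block store leaves every block written exactly 10 times, instead of a single block absorbing all 80 writes.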

Some pretty sophisticated caching is also done using conventional RAM so that when a sector is constantly being re-written, it's not committed to actual Flash memory until the updates are completed or some other condition (such as a power failure) is signaled.

By use of these clever tactics, the effective write-cycle life of an SSD in regular use already exceeds the MTTF of a conventional hard drive by at least an order of magnitude.
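Some quick, back-of-envelope arithmetic in Python (the capacity and usage figures are assumptions chosen purely for illustration) shows why that order-of-magnitude claim is plausible:

```python
# Back-of-envelope SSD endurance estimate.
# All figures below are illustrative assumptions, not manufacturer data.
CELL_WRITE_CYCLES = 1_000_000   # write-cycle life per cell (figure quoted above)
DRIVE_CAPACITY_GB = 64          # drive size
WRITTEN_PER_DAY_GB = 20         # data written per day

# With ideal wear-levelling, writes spread over the whole drive, so each
# region is rewritten only (data written / capacity) times per day.
rewrites_per_day = WRITTEN_PER_DAY_GB / DRIVE_CAPACITY_GB
years = CELL_WRITE_CYCLES / rewrites_per_day / 365

print(f"estimated endurance: {years:,.0f} years")
```

Under these (generous) assumptions each region sees about a third of a rewrite per day, giving a theoretical endurance measured in millennia -- with wear-levelling, the raw cycle limit simply stops being the weakest link.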

Bearing all this in mind it becomes pretty obvious that the days of the existing whiny, spinning, vulnerable mechanical hard drive are numbered and that your next PC (or maybe the one after that) will almost certainly come fitted with a solid-state drive.

The arrival of the affordable SSD may do more for the effective power of the desktop computer over the next five or six years than enhancements in CPU architecture can contribute. The bottleneck and comparative unreliability of the slow hard-drive is about to be eliminated by some clever silicon.

Cheaper, faster, smaller -- when it comes to computers and storage, it's the way the future's shaping up.

Friday, October 16, 2009

It would appear that, after a very long gestation period, the e-book reader is finally catching on.

Not only has Amazon enjoyed very healthy sales of its Kindle but a number of other vendors have announced the launch (or imminent launch) of competing products.

Of course this won't spell the end of ink-stained paper any time soon, but it will force some major adjustment by publishers and distributors alike -- an adjustment some may be reluctant to embrace.

The book publishing industry is rapidly facing the same situation that was faced by the recording and movie industries a decade or so ago and it will be very interesting to see if they've learned anything by watching the fiasco that resulted there.

Until now, book publishers have been protected from the effects of the digital age, simply because their wares were not readily available in digital form -- and that meant one very important thing:

The cost of copying exceeded the cost of buying a genuine edition.

This was also the case for music and movies right up until the arrival of CD/DVD burners and the internet. Sure, you could borrow a friend's LP or CD and copy it to cassette tape but that was a slow process and the second-generation copy was always inferior to the original. It was only when the original material was provided in digital format and true bit-for-bit copying was affordable that any real threat was posed to the (then) existing business models.

Well, now that we have truly practical e-book readers and Google is well on the way to digitising all existing paper-based publications, book publishers will soon be in exactly the same boat.

If publishers don't adjust and revise their pricing to reflect the much lower cost that illegal copying represents to their digital volumes, they will lose enormous amounts of money to piracy.

Right now, copying a best-seller would involve hours of photocopying or scanning -- something that's just not worth it for a $20 book. Once that book is already available in digital format, copying will be as simple as firing up suitable software to circumvent the DRM and then write the resulting "ripped" file to disk. Once "freed", it would only be a matter of minutes before such books became available for "free" download on various P2P networks -- as much music and video content already is.

So how can publishers compete?

Well they could simply refuse to release works in digital format -- but that would be like the music industry sticking with cassette tapes and the movie industry sticking with 16mm film. It simply would not work because people demand the many benefits that digital media provides.

The reality is that if book publishers are to survive as an industry, they have to significantly drop their prices -- to the point that it remains more attractive to buy than to steal.

Given that "going digital" will save the publishers a huge amount of money by eliminating printing and transport costs, while also allowing them to sell directly to the customer, I would expect to see the price of a digital-edition be a tiny fraction of what the paper-edition commands.

I also suspect we'll see a shift away from the "buy this book" model towards commercially operated libraries where, for a fixed monthly or annual fee, you can borrow digital editions of any book you want.

Providing the price is low enough and the process is simple enough, this model may be the single greatest hope the industry has for its continued survival.

People value their time. If they can simply log into their e-library, search for the title they want using friendly software, and download it onto their e-book reader -- all for a fixed flat-rate fee -- then few will bother with the hassles of P2P and the legal, logistical and security risks such systems pose.

An e-book reader with wireless connectivity would then become a doorway to the world's largest library of book titles.

It now becomes very simple to see where Google is headed with its service, doesn't it?

Friday, October 9, 2009

If ever there was a time for individual net-users, system administrators and those who set IT policies within larger entities to be vigilant and pro-active, it is now.

This week alone has seen a wave of vulnerabilities, alerts and news reports that show just what a hostile place the online world has become.

As well as the surprisingly little-publicised browser vulnerability that affects the safety and security of SSL connections, the wires have been flooded with stories about the ease with which phishers managed to secure the login IDs and passwords of Yahoo, Hotmail and GMail users.

Then there was word of a new critical vulnerability in the ubiquitous Adobe PDF reader. Using a suitably constructed document file, hackers can gain control of a user's computer -- enabling all sorts of nefarious new code to be uploaded so as to turn that machine into a spambot or to install trojans, keyloggers or other malware.

One of the astounding revelations that has come from the Hotmail phishing exploits is the weakness of passwords some people have chosen.

According to reports, the most common password was "123456" (or some other run of sequential digits) -- something that would send a shiver down the spine of any half-decent system admin or security consultant.

Is this recent hike in hacks, vulnerabilities and successful phishing exploits just an anomaly or does it point to a dangerously poor understanding of basic security procedures within the online community?

There is some good news however. A lack of funds now need not be a hurdle to equipping your PC(s) with anti-virus/anti-malware software. Microsoft's new security suite has been given fairly positive reviews. On the other hand, despite planning to release a bevy of patches next week, the software giant still hasn't done anything about the gaping hole in its SSL code.

If this trend continues, and there's no reason to believe it won't, it really is time for all computer users to take a good long look at the measures they have in place to protect their systems and the data entrusted to them.

Basic commonsense such as using strong passwords that are changed at regular intervals is a great start. The installation and maintenance of reliable anti-virus/malware systems is also critical to security.
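That commonsense can even be automated. Here's a minimal sketch, in Python, of generating the kind of password that will never appear on a "most common" list, using the standard library's cryptographically secure `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password drawn from letters, digits and
    punctuation, using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Of course, nobody can memorise a dozen of these, which is where reputable password-management software earns its keep.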

However, as software vendors continue to patch their holes, it's increasingly important to educate users as to the many different kinds of "human engineering" that criminals are now resorting to.

The fact that the wife of FBI director Robert Mueller has banned him from using the internet after a "close call" with a phishing scam is proof that we should never assume that anyone knows how to recognise and avoid such attacks.

Friday, October 2, 2009

One thing is a given: that all our modern electronic devices require electrical energy to function.

While many of our computers and other hi-tech appliances are happy to hook up to the mains, almost everything we use these days (even that desktop PC) has a battery of some kind in it.

Way back in the dim dark days of electronics, portable devices were powered by stacks of small zinc-carbon cells. Even before the arrival of the transistor, portable electronic equipment would run (albeit not for long) on these simple batteries.

With the arrival of solid-state technology, the battery requirements of portable devices were significantly reduced. No longer was it necessary to have two batteries (one for the filaments and another for the high-tension supply). A single 9V battery or assortment of AA, C or D-sized cells would suffice.

However, the chemistry inside these batteries was still predominantly zinc/carbon and they were primary cells (not rechargeable). The single largest cost of ownership with these old devices was the price of regular battery replacement.

In response to the endless expense of primary cells, new forms of secondary (rechargeable) cells began to appear, the main one using a nickel-cadmium chemistry. Although cadmium is a toxic and environmentally harmful material, this was in an era long before such things were an issue, so NiCads (as they became known) proliferated and took portable devices to a new level of convenience and cost-effectiveness.

Since then we've seen numerous advances in electronics and battery technologies, to the point where we now have very low-drain, large-scale, low-voltage, low-power CMOS integrated circuitry that consumes minuscule amounts of electricity compared to earlier gear.

The state of the art in battery technology is now based on lithium chemistry. Lithium-ion, lithium-polymer and lithium-iron-phosphate cells are not only far kinder to the environment but also offer much higher energy densities.

But where to from here?

While there has been much research into supercapacitors, which use nano-technology to store electrical energy as a surplus and deficit of electrons rather than through a chemical reaction, researchers confirm that this technology is still in its infancy.
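The reason is easy to see with a back-of-the-envelope calculation. The energy a capacitor holds is E = ½CV², and the figures below are merely typical of a large commercial supercapacitor, not any particular product:

```python
# Illustrative figures only: a large "can sized" commercial supercapacitor.
capacitance_f = 3000      # farads
voltage_v = 2.7           # volts

energy_j = 0.5 * capacitance_f * voltage_v ** 2   # E = 1/2 * C * V^2
energy_wh = energy_j / 3600                        # joules -> watt-hours

print(f"{energy_j:.0f} J, or about {energy_wh:.1f} Wh")
```

Around three watt-hours from a device that size is a long way short of what a similarly sized lithium cell delivers, which is why energy density remains the sticking point.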

Fuel cells have been promised for decades but, despite regular announcements, even from companies as highly respected as Toshiba, we still don't have those promised Vodka-powered laptops yet. Despite working well in the lab, there are clearly a raft of issues associated with large-scale manufacture of such fuel cells so don't hold your breath.

One idea that has been mooted in some circles recently is the nuclear battery.

Yes, that's right -- a small capsule of radioactive material that can generate electricity with no moving parts.

While this might sound like a giant step backwards and a potential source of hideous environmental danger, that might not be the case.

Thanks to the much lower power demands of many modern portable electronic devices, the amount of radioactive material required may be quite small -- little more than was once used regularly to create the luminous dials on clocks and wrist-watches back in the 1950s.

It's also worth noting that more than a few of the unmanned spacecraft we've sent on missions far from earth have had nuclear batteries onboard. They're not a new concept but one that may see a revival if research is able to significantly reduce the amount of nuclear material required.

Or perhaps the nuclear battery will be a solution looking for a problem.

What manufacturer would want to try to sell a battery that lasts for decades rather than weeks? Unless you charge a fortune you'd soon saturate the market and sales would dry up.

And then there's the issue of fashion and the relentless advance of electronics technology.

Take the humble cellphone to see what I mean...

These days, people decommission or throw away perfectly good, fully functional mobile phones at regular intervals -- simply because a better, smaller, more powerful replacement comes on the market. What is the point in supplying a battery that lasts 10 years without a recharge when the item it is designed to power might have an effective life of just a year or so?

Nuclear batteries will re-appear but it's unlikely we'll see them in consumer electronics -- they just don't make commercial sense.

Likewise, you won't see cars powered by nuclear batteries -- that really would require too much radioactive material and definitely would pose an environmental/safety problem.

However, I would not dismiss the chance that nuke-powered watches and clocks may appear in the not too distant future -- no backlight required :-)

And let's be honest, wouldn't it be marvellous if you never had to worry about your cellphone or laptop battery going flat again?

Friday, September 25, 2009

Have you ever gone out to post two important letters and, just moments after dropping them into the mailbox, wondered if you might have put the wrong letter in the wrong envelope?

Fortunately there's usually no harm done if you've been careless enough to make such a mistake and the unintended recipients will, most of the time, simply return the incorrect contents to you or at least give you a call to let you know of your mistake.

It's far less likely that you've ever accidentally included a full list of all your customers' account details in the invoices you mail out each month. That would be a very difficult mistake to make -- the added bulk of the extra pages and the resultant "fat" envelopes would instantly be reason to investigate and discover your error.

However, in the e-world, such awful mistakes are not so easily spotted and are often far easier to make.

The very power afforded by the internet to attach files and fire mass-mailings off to huge lists of addresses also makes it easy to create the most monumental stuff-ups, as a UK ISP found to their embarrassment recently.

Someone at Demon Internet accidentally attached a spreadsheet containing the details of thousands of customers to the billing emails sent out to a thousand users.

Included in the spreadsheet was sensitive information such as customers' login names and passwords. What a right-royal stuff-up!

Outside of the obvious stupidity of keeping plaintext passwords on file, it's clear that this is the kind of "human error" that could happen in any workplace; an error that is made all the easier thanks to the point and click ease with which we dispatch our communications in the e-age.

It's not just accidents that can expose sensitive data either.

I've lost track of the number of emails I've received where the sender has opted to include every recipient in the "To" field -- effectively publishing that list to all recipients. Any company that does this may be effectively handing their competitors an extremely valuable list of prospects.
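Avoiding that particular blunder is straightforward. Here's a minimal sketch using Python's standard `email` and `smtplib` modules (the addresses and server name are made up for illustration): the trick is that the real recipient list travels on the SMTP envelope, never in a visible header.

```python
import smtplib
from email.message import EmailMessage

def build_mailout(sender: str, subject: str, body: str) -> EmailMessage:
    """Build a mass-mail message whose headers reveal only the sender."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = sender          # the visible To: header names only ourselves
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_mailout("news@example.com", "Newsletter", "Hello subscribers.")
recipients = ["alice@example.com", "bob@example.com"]

# The actual list is passed as the envelope recipients at send time,
# so no recipient ever sees another's address:
# with smtplib.SMTP("mail.example.com") as smtp:
#     smtp.send_message(msg, to_addrs=recipients)
```

The "Bcc" field in an ordinary mail client achieves the same thing, provided the sender remembers to use it.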

From a management perspective, the e-age brings in a whole new issue of information security that goes beyond firewalls and anti-virus software. It becomes crucial to ensure that all staff have a full and thorough understanding of just how applications such as email work and what risks are involved.

It's easy for someone to claim to be familiar with email on their CV but have they ever used *your* chosen email client to send messages to multiple recipients using a list of email addresses? If not, you could be in big trouble.

Even something as simple as developing a fool-proof file-naming and filing system for important documents becomes important. Unless files are stored in carefully categorised folders and named clearly and appropriately it becomes all-too-easy for a tired worker to accidentally send out a copy of your cost-prices to a retail customer -- or a customer quote to a potential competitor.

In the 21st century, 'e' stands for 'easy' but that also means it's 'e'asier to make really bad mistakes if you take your eye off the ball.

Friday, September 18, 2009

New Zealand TV viewers and internet users discovered yesterday that when Tivo launches in this country later this year, they may have to change ISPs to get the full benefits.

Although Tivo can be used like a regular PVR, one of its strengths is the fact that it can also download material over the internet.

Instead of being restricted to broadcast content and programming, Tivo users will be able to hook their boxes up to the internet and suck down all manner of additional material -- some free, some "pay per view".

So far so good.

However, in a deal announced yesterday, TVNZ has opted to form an alliance with Telecom for the delivery of this extra content through the internet.

The alliance is critical to the success of this extra feature because, as we all know, our broadband accounts are not "all you can eat".

Many plans are capped, hitting users with fairly stiff "over-cap" charges when they exceed their monthly allocations or forcing them to suffer the indignity of significantly reduced speeds if they're considered to be exceeding a "fair use".

Under the terms of the deal with TVNZ, Telecom will allow customers to download as much content onto their Tivo as they want, without that traffic counting towards their monthly consumption.

This might sound like a great deal for existing Telecom broadband customers, but what about other ISPs? How can they compete?

Well, I suspect they can't.

Since the vast majority of the broadband infrastructure around NZ is owned and operated by Telecom, only they can afford to deliver masses of extra data without facing massive additional costs. Any other ISP that simply resells Telecom's DSL service will be unable to match Telecom's "all you can eat" service and may lose customers as a result.

Even those ISPs that have their own DSL infrastructure will find the going hard, due to the very limited coverage their own equipment provides.

All is not lost however...

If/when the government rolls out our own nation-wide broadband network, Telecom's strangle-hold on the DSL infrastructure may be broken (or at least weakened). It's up to politicians whether they roll out a network that is truly free of commercial bias and dominance but they would be foolish not to take this chance to break Telecom's monopoly.

In the meantime, if you want to get the most out of your Tivo, you may find yourself having to change ISPs.

Isn't that anti-competitive?

And we can only wonder what effect all those Tivo users will have on backhaul capacity that is already saturated in some areas. Will Tivo kill the internet for the rest of us as thousands of Kiwi Tivo users become the new data-leeches?

Friday, September 11, 2009

As technology advances, most of us are faced with a bewildering number of IDs and passwords that must be remembered in order to access such important things as our internet account, our email, our online banking, our VOIP service, our laptops, etc, etc.

Unfortunately, the human brain is not perfect and, especially as we age, it's prone to forgetting some very important things, including passwords.

As a result of the limitations of our memory, many people opt for simple, easy to remember passwords that can often be trivial to crack.

Even worse, some folks simply choose a single password for all their authentication activities. This means that if their password is compromised, it becomes possible for any evil little sod to assume their identity across a wide range of services.
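One partial remedy is to derive a different password for each service from a single memorised master secret. The Python sketch below illustrates the idea (it's a toy to show the principle, not a substitute for proper password-management software):

```python
import base64
import hashlib
import hmac

def site_password(master: str, site: str, length: int = 16) -> str:
    """Derive a distinct, repeatable password for each service from one
    master secret, so a leak at one site doesn't expose the others."""
    digest = hmac.new(master.encode(), site.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest).decode()[:length]

# Same master secret, different sites -> unrelated passwords:
print(site_password("correct horse battery staple", "example-bank"))
print(site_password("correct horse battery staple", "example-mail"))
```

Because the derivation is repeatable, nothing needs to be written down, yet compromising one site's password tells an attacker nothing useful about the others.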

So what is the solution to this problem?

How can we use technology to provide a universal authentication system that can prove a person's identity and thus restrict access only to those who are properly authorised to access a service?

And how could such an authentication system extend beyond the virtual world into the real one?

Already most of us have to carry multiple forms of ID. We have a driver's licence, passport, credit-card, FlyBuys and any number of other authentication documents. Surely it would make sense to do away with all this unnecessary duplication and switch to a universal identifier?

Biometrics are one option but have proven to be less reliable and more easily duped than proponents had hoped, thus compromising their practical application.

So what about an embedded RFID chip?

It works for cats, dogs and palm trees so why not for humans?

Already some nightclubs have experimented with such things, allowing members or VIP patrons to gain free access and have drinks automatically debited to their accounts without the need to present any other form of ID.

If we were all to have an RFID chip embedded in a part of our bodies that was not vulnerable to unauthorised scanning, we could leave all those other documents at home. No longer would you have to worry about being fined for failure to carry your license when stopped at a checkpoint while driving down to the beach for a late-evening swim in mid-summer, wearing just your togs and a towel.

Even better, you could authorise any transaction (online or real-world) by simply placing your hand on an RFID energising pad.

Imagine how much efficiency this would add to such mundane things as buying your groceries. No longer would you need to wait for the old lady in front of you who is confused about which way to swipe her card or who can't remember her PIN.

It would also provide the universal identifier and authentication sought-after by governments for the safe use of their e-services.

As people, we already carry the universal marks of vaccinations on our shoulders and have already yielded to the pressure to adopt such Orwellian mechanisms as electronic passports, so why not take the next logical step and go for RFID chip implants?

Would the benefits outweigh the concerns?

Would the savings outweigh the costs in terms of privacy and human rights?

We have the technology, all we need is the resolve.

Or, would this be seen as validation of the Bible's predictions and considered "the mark of the beast" by too many people?

Friday, September 4, 2009

What do you think would happen if I decided to make copies of every piece of music ever recorded and place it on the internet for people to sample and download?

Do you think I'd be allowed to do this?

Would authorities just say "that's okay, carry on"?

What would the recording companies have to say about it?

Do you think it would make any difference if I said that I was only going to allow people to listen to about 20 seconds of each track but, if they wanted to buy it I'd sell it to them for a price I felt was fair -- and send 60% of that money to the artists who wrote and performed the track?

Would that be fair or even legal?

Should those artists have to give their permission for me to do this or should I be allowed to do it anyway and only withdraw their music if they filed a law-suit against me?

And what about those artists whose work I'd already put online? Should I be allowed to offer them some paltry amount (say $50) to make amends for the infringements to their intellectual property rights so far?

Well, I think the answers to the above questions are not hard to work out.

Anyone who tried this would find themselves sued to oblivion and back, penniless and probably facing prison-time.

But, if you replace "music" with "books" and introduce Google as the company copying and offering these copyrighted works online you'd be looking at exactly the situation as it exists with Google Books.

Why should books be treated any differently to music?

Why are Google effectively riding rough-shod over the rights of writers, authors and book publishers in a way they would not dare to do to recording artists and studios?

Take YouTube, another Google property for example...

When Warner Music threatened to sue them for carrying music tracks and videos without express permission, YouTube pulled all those videos and is now very active at policing the uploading of new potentially infringing content.

So why are they now saying "we're going to publish everyone's books online", pretty much without regard to the intellectual property-rights of those who wrote them?

Could it be because the recording industry is a lot better organised and has a lot more money in its war-chest than the publishing industry does?

Google have offered to sweeten the pot for writers by acting as a sales portal for their works and, as an author, I think their deal is pretty damned good. However, their "take it or leave it" approach to this does leave a sour taste in the mouth.

And of course Amazon, who possibly have the most to lose if Google starts selling e-versions of popular books, are outraged. They're bouncing off the walls, furious that Google's move may well ankle-tap their own e-Book initiatives in a way no other company ever could.

If Google gets away with this, I'm hoping that it will set a precedent that may tempt them into launching "Google Music" - where recording artists will finally have a useful and profitable way to sell their wares directly to the public in a way that offers great value to everyone.

Friday, August 28, 2009

Nano-this, nano-that; it seems that every day there is a new bunch of headlines on the wires that extol a new use for nano-technology.

Even here in New Zealand we're seeing nanotech headlines making the front page. Today, for example, we read that a Victoria University student, John Watt, has been named the MacDiarmid Young Scientist of the Year for his work in nanotech.

Unlike some esoteric theoretical application, Watt's work delivers a practical way to significantly reduce the amount of palladium needed in vehicle catalytic converters.

So what is so important about nanotech that makes it such an exciting new frontier of science and technology?

Well the answer may well be quite a bit more exciting than first meets the eye.

We live in a universe that exists on two basic scales: the Newtonian scale and the quantum scale.

The only scale we're directly aware of is the Newtonian one. This is where all those simple laws of physics apply. Mass, acceleration, action/reaction, and the concepts that we see in effect every day are easy enough to understand and we've built most of our existing technology using these basic laws of physics and chemistry.

The quantum world is a whole different kettle of fish however, and a realm into which we've only just begun to move.

Because the quantum world deals with the basic building blocks of matter (electrons, protons, neutrons, photons and the like) it's very difficult for us to control and observe what's happening. The very act of "taking a peek" at a photon will sometimes actually change its behaviour.

If you think of the quantum world like a billiard table, the only way we can tell whether there are any balls on the table, which direction they're traveling and how fast they're moving is to roll another ball across the felt and see if it is deflected by hitting another. Unfortunately, as you can imagine, when the ball we roll hits another, that ball will itself be deflected by the collision so although we know where it *was*, we no longer know where it is.

This is why physicists use terms such as "probability" when dealing with objects at a quantum level. We don't know for sure what the position and speed of a quantum object is, we only know what the probability is that it'll be in any particular place at any particular time.
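The billiard-table analogy is the informal version of Heisenberg's uncertainty principle, which puts a hard lower bound on how precisely an object's position and momentum can be known at the same time:

```latex
\Delta x \,\Delta p \ge \frac{\hbar}{2}
```

Here Δx and Δp are the uncertainties in position and momentum, and ħ is the reduced Planck constant -- tiny, which is why the effect only matters at very small scales.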

But that's enough of basic quantum theory -- there are plenty of resources on the Net if you want to learn more (hint: YouTube is your friend).

The exciting thing about nanotechnology is that when you make things small enough, they stop strictly obeying Newtonian laws and start to be affected by quantum laws. Given that quantum laws can be significantly different to Newtonian ones, incredible new doors open to scientists and technologists.

Things that are impossible on a Newtonian scale become possible at a quantum scale, so a fascinating world of potential advances in technology is now opening up.

While the current range of practical applications for nanotech are still limited by the technology we have for making such materials, the future is looking incredibly promising.

For example, quantum physics has been mentioned many times in respect to encryption and security but here's an interesting twist on the use of nanotech in a more traditional form of data-hiding.

Keep a watchful eye on the advances being made in the world of nanotech, you might be surprised at what pops up.

When you lay down your hard-earned cash to buy a device like this, it's reasonable to expect that you have control over that piece of electronics. It's up to you what software you load and run on your computer. It's up to you to choose what applications your mobile phone runs. It's your choice as to which eBook titles you load into a reader and which you delete.

Well that's the way it's supposed to work but, as recent events have shown, manufacturers seem to want the final word when it comes to your right to choose what you do with the products they make.

The best example of this is the recent remotely commanded deletion of an eBook file on the Amazon Kindle eBook reader.

People who'd paid good money to load the iconic George Orwell novel 1984 onto their Kindle woke up one day and found that it had mysteriously vanished.

No, it wasn't a hardware or software fault that caused their "bought and paid for" eBook to disappear, it was Amazon's doing.

On July 17th of this year, Amazon commanded Kindle readers to delete two legitimately purchased eBook titles from any reader onto which they'd been loaded, replacing them instead with a store-credit.

Amazon claim that this was because the publisher who'd sold them the eBook version of these titles had no legal right to do so and therefore they were simply enforcing the original copyright holder's rights, through the unannounced deletions.

Tell that to the unfortunate student who'd annotated his copy as part of an assignment, only to find all that hard work effectively lost forever through no fault of his own.

And then there's the iPhone...

As demonstrated recently (and commented on right here in this column) Apple have conceded that there is the same kind of functionality built into the iPhone. They claim that it's a safety measure that will only be used in the event that some kind of malevolent or destructive application is found to be installed on the phones. By reserving the right to delete or disable any application on any or all iPhones, Apple says it's doing us all a favour.

Really?

Now, if these remotely activated "rights to delete" are not abused it's possible to argue a case for their inclusion in our modern electronic devices. But what if they're misused, solely for the commercial benefit of the company who makes those devices?

And what right do these companies really have to say what you can and can't do with your legally purchased devices?

Worst of all, what happens when these "back doors" are compromised by some clever hacker?

How can we be sure that important information or applications won't be deleted at will or at random by some hacker group, who then demands huge sums to confer immunity only to those who are willing to pay-up when blackmailed in this way?

Food for thought.

Should manufacturers have a right to over-ride a consumer's choices and actions when they purchase a hi-tech electronic device or should laws be passed that protect the individual's right to ultimate control?

Thursday, August 13, 2009

Sometimes it's hard enough finding intelligent life right here on earth, let alone somewhere in the vast expanse of the universe.

Despite early hopes, we've pretty much come to the conclusion that earth is the only inhabited planet in our own solar system, so now we have to look even further afield to seek out aliens who might want a chat.

While programmes like SETI quietly turn an ear to the skies and churn through almost unimaginable amounts of data for signs of intelligence, most of us just sit and wait.

Well now you don't have to wait.

If you've got a few spare moments you can redirect your web-browser to HelloFromEarth.net and send an interplanetary SMS to Gliese 581d, a "super-earth" planet orbiting the low-mass red dwarf star Gliese 581, from which it clearly gets its name.

This planet is some 194 trillion km from earth so if you get annoyed with the fact that Vodafone and Telecom sometimes don't deliver your SMS messages until hours after you send them, perhaps this isn't for you. The delay between when you send your message and it finally arrives at its destination will be a rather significant 20 years.
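A quick sanity check in Python shows where that 20-year figure comes from (the distance is the article's quoted figure; the light-year conversion is the standard one):

```python
# Rough check of the quoted numbers for Gliese 581d.
distance_km = 194e12          # ~194 trillion km, as quoted above
light_year_km = 9.4607e12     # kilometres light travels in one year

one_way_years = distance_km / light_year_km
print(f"One-way trip: about {one_way_years:.1f} years")
print(f"Round trip:   about {2 * one_way_years:.1f} years")
```

So even the speediest possible reply would take the best part of 41 years to land back in your inbox.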

This is of course, little more than a publicity stunt, but a good one.

The odds that this distant planet is home to intelligent life would seem remarkably slim and, even if it were, how do we know they'd be listening?

Even if they were listening, how do we know they'd understand us?

Even if they did understand us, would you even remember what you'd originally said when the reply finally came back, another 20 years later?

Nevertheless, those who have organised this little communications exercise seem to think that only messages in English (and suitably vetted for offensive language) should qualify for transmission.

Of course there are those who are strongly opposed to the trivialisation of the search for alien life. There are also those who fear that this kind of exercise may be the equivalent of ringing a dinner-bell, inviting malevolent hungry aliens to descend and devour earth's inhabitants.

Alas, the reality is almost certainly that, long before your SMS reaches any distant planet, it will have withered and faded until it is no longer discernible above the hiss that is the background noise of the universe itself.

You somehow have to wonder whether there might be a better and more immediate use for this technology, don't you?

I also wonder why it is that I can send an SMS to a planet 20 light-years away for free but I still have to pay Vodafone or Telecom 20 cents to send one across the room to my other mobile. What's more, some of those terrestrial TXTs simply vanish, never actually making it across the shagpile at all. Who knows, perhaps they've been abducted by aliens.

Thursday, August 6, 2009

One of the most exciting and practical applications of computer technology in the 21st century is that of three-dimensional printing.

Almost any shape or design that can be conceptualised then converted into a suitable CAD file is now able to be printed out by the amazing devices that are 3D printers.

Right now, these printers are practical but incredibly expensive, not only to purchase but also to run and, as far as I'm aware, there aren't any third-party 3D printer refill operations around to reduce those costs.

Another drawback of today's 3D printers is that the resulting product is generally formed from a kind of plastic. In many cases this plastic may well be strong and durable enough but all too often, the items being printed really need to be made from something with different physical, electrical and thermal properties.

In such cases, it's normal to use the printed item as a "plug" used to make casting molds from which the final product will be created. Unfortunately this means extra work, time and expense.

There are a growing number of "bureau" printers appearing on the market however, and some of these effectively allow you to print your 3D item in whatever material you choose.

By simply submitting the CAD file and material specifications, you end up with exactly what you asked for, usually just a few days later. Of course, as is always the case with computers -- what you ask for may not be what you want.

Streamlining this whole process is a new version of 3D printing which actually prints using metal instead of plastic. This system progressively deposits layers of sintered steel that, once the printing process is complete, are fused together with heat to create a solid metal object. This is an important step in the development of 3D printing technology.

Intricate metal assemblies that may previously have taken many hours of machine-time and work by skilled engineers to create, can now be printed out in a few short minutes.

Could it be that the computer is about to replace the blacksmith yet again?

Once we have truly affordable multi-material 3D printing, the "replicator" devices of Star Trek fame may become more fact than fiction.

If you examine just how quickly advancing technology brings down prices, it's not at all unreasonable to expect 3D printers will become as ubiquitous as the home-PC within a decade or so.

Such machines could also change the online shopping experience forever. Instead of consigning factory-made items to the postal service for delivery, your online purchase could consist of little more than a download that was sent to your very own 3D printer. Within minutes of ordering, the item would be spat from your printer -- bright, shiny and ready to use, each and every component having been "printed" in situ and to precise specifications.

Thursday, July 30, 2009

Apple has created a pretty good earner for itself by carefully controlling the software users can download and install on their iPhones.

By vetting, approving and controlling the installation of only certain bits of code, Apple can take a clip on the ticket of every "authorized" application users choose to install.

This concept has obvious benefits for users. For a start, they can download new bits of code with reasonable confidence that they're free from malware and meet at least minimum standards of quality and performance.

On the downside, it makes the applications themselves more expensive and limits a user's choice to only those apps Apple itself chooses to allow. This was highlighted recently when Apple refused to allow Google's Voice product to be installed on its phones, for fear this would upset AT&T by adversely affecting its revenues from regular calls made by iPhone users.

Some users have decided they don't want Apple's imposed software censorship controlling what goes on their iPhone however, and have opted to "jailbreak" their mobile.

Apple has struck back by claiming that this practice threatens to crash the entire mobile phone system because it opens the doors to untested or malevolent applications that could play havoc with the network infrastructure.

A suitably "hacked" iPhone, Apple says, could create an effective denial of service attack by overloading a cell tower, effectively bringing it to its knees.

Advocates of jailbreaking claim Apple is simply trying to spread fear, uncertainty and doubt (FUD) with these claims. They point to the fact that Android-based phones ought to pose the very same threat but that nothing bad has happened.

Indeed, when challenged, Apple seems unable to provide any proof that such attacks have ever taken place as a result of a phone that has been jail-broken.

Unfortunately for Apple, the sheer mass and brainpower of hackers will always overpower the attempts of any manufacturer to keep their products "locked" and under central control. It's only a matter of time (if it hasn't happened already) before users can jailbreak their own iPhones and install whatever software they choose -- without the need for Apple's blessing or any kind of payment.

No doubt a proliferation of jailbreak software will create a degree of mayhem as unsuspecting iPhone users end up loading their mobile with all sorts of malware that masquerades as a useful but uncertified application.

At least now, the choice is one that can be made by the iPhone user, rather than a manufacturer who sees the certification and distribution of applications as a major part of its revenue stream.

This may not see mobile networks crumbling under DoS attacks but it may spawn fertile new ground for evil little sods to install trojans, backdoors and viruses on iPhones. Only time will tell.

Thursday, July 23, 2009

Here in GodZone, NZ Post has announced that it will be opting to use Google's email and messaging "cloud" rather than in-house Microsoft applications and I think this is the start of a trend that will continue to grow in coming years.

Why go to all the hassle of running your own mailserver, with the attendant issues of hackers, spam, hardware and software maintenance, etc., when you can effectively outsource all this to a big player like Google?

Google has the bandwidth, it has the servers, it has the software and it has the technical nous required to ensure maximum uptimes.

Then we have other companies that are considering shifting far more than just their email and messaging services into the cloud, which is what cloud-based service-providers like Xero are relying on.

One vision for the future is that we go back to what is effectively the old thin-client model that was promoted many years ago for the LAN. Why burden individual workstations with the overheads of database management, transaction processing etc., when you can offload all that to a very powerful central server or server-farm?

Doesn't this sound like a wonderful scenario...

No more on-site backups, no more worrying about protecting yourself from disaster, either natural or manmade. No more having to regularly update your applications with downloaded patches, massively reduced system administration costs, etc., etc.

But there is one very important fly in the ointment that is the cloud.

What happens when the cloud actually carries a little (or a lot) of rain?

Yesterday morning I tried to log in and check my YahooMail.

All I got was a DNS error. Apparently the DNS entry for one of Yahoo's mailservers had disappeared and as a result, my connection to that particular cloud was severed.
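
Incidentally, a missing DNS entry fails in a distinctive way that's easy to tell apart from a dead server. A minimal sketch using Python's standard library (the hostname here is purely illustrative):

```python
import socket

def check_host(hostname: str) -> str:
    """Distinguish a DNS failure from a reachable host.

    A vanished DNS entry raises socket.gaierror ("get address info"
    error) before any connection is even attempted -- exactly the
    symptom described above.
    """
    try:
        addr = socket.gethostbyname(hostname)
    except socket.gaierror:
        return "DNS lookup failed -- that corner of the cloud has gone missing"
    return f"{hostname} resolves to {addr}"

# The .invalid top-level domain is reserved and guaranteed never to resolve.
print(check_host("mailserver.example.invalid"))
```

A quick check like this can save you from blaming your own connection when it's really the provider's DNS that has hiccupped.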

Fortunately this only lasted a few minutes and I don't commit critical communications to a free email account so there weren't any beads of perspiration on my brow.

But what happens when there's a major outage and it's a mission-critical application such as your order-processing or accounting system?

What do you do if the Southern Cross data cable fails and all your offshore-based cloud applications are no longer within reach?

A smart player would make sure they regularly refresh local copies of their cloud data -- but isn't that just as much of a hassle as doing your own backups on an in-house server? Wasn't the outsourcing of this kind of admin one of the big selling features of the cloud?

And, if your business suffers significant losses due to such an outage, where do you go for compensation?

Your ISP won't be interested in providing such compensation. Chances are that they provide service on a "best effort" basis. If you use Xtra/Telecom you might qualify for a $50 phone card or a month's free internet but that's about it.

Even if you try to claim compensation from the operators of the Southern Cross Cable you'll likely get a cold reception.

So it seems that cloud computing has the potential to save SMEs and large organisations a huge amount of money. Unfortunately, it also has the potential to cost them even more.

Before welcoming the cloud with open arms, it might pay to check up on just what your legal position is in respect to compensation if/when things go bad and you're left without access for any significant period of time.

And, unless Google and other cloud service providers start servicing local customers using locally-based servers, there will always be a dark side to the cloud that should be considered by all those contemplating the jump from in-house to web-based applications.

Thursday, July 16, 2009

It doesn't matter whether you've got the latest patched version of MS Windows, all the antivirus software money can buy, or the most fabulous firewall in the world - you may still be vulnerable to some of the evils lurking online.

If you need proof of this, just look at how often your Microsoft-based PC downloads patches and fixes for brand-new, hitherto unknown vulnerabilities that continue to be uncovered and exploited on an almost weekly basis.

The latest round of patches, issued on Tuesday of this week US-time, includes a raft of fix-ups, including a couple of "zero-day" holes that, until fixed, could have compromised the security of any machine targeted by suitably skilled hackers.

And it's not just Microsoft products that place your systems at risk...

The much-vaunted Firefox browser has also shown itself to harbour nasty security vulnerabilities this week and, at the time of this posting, no fix has yet been issued for this huge hole.

So, apart from running some esoteric, seldom seen operating system and hoping that it's too small a target for hackers to bother with, just what can savvy computer users do to avoid placing their valuable data at risk?

Linux is one answer but it's no silver bullet either -- just look at the long list of security vulnerabilities reported for one distribution of this increasingly popular Windows alternative.

It appears that the best weapon against having your system compromised is vigilance and good practice.

The truth is that no security strategy is any stronger than its weakest component. There's no point in having the most expensive and capable firewall in the world if your users are free to plug in "bought from home" USB drives that may contain malware.

Likewise, there's no point in dropping your guard just because you've invested in the latest and greatest anti-virus software. Although it's a great help, it's far from 100% effective in detecting and eliminating new threats to your system that may infiltrate other first-line defenses.

Perhaps the only real protection against losing valuable data or up-time to malware is a strong sense of paranoia -- and a good set of regularly refreshed backups.

Wednesday, July 8, 2009

It costs a small fortune to make even a half-decent TV advertisement and even when you do come up with a gem, it soon goes stale after repeated screenings, leaving your audience resentful that you (and your product/service) are interrupting their favourite programmes with your message.

Internet advertising is about to get a whole lot more interesting and TV advertising may soon be so worthless they won't be able to give it away.

A new generation of advertising specialist appears to be surfacing to take advantage of these trends.

These aren't the pony-tailed, latte-sculling, bran-muffin eaters who have for so long controlled the ad-spend of small and large companies alike. No, these are a bunch of Net-savvy viral marketers who have the potential to offer customers a whole lot more bang for their buck.

Already there are companies appearing who will advise you how to best use new social-media tools such as Facebook and Twitter to promote your product - but I suspect most of them are like the "web-consultants" of old. They really deliver little more than commonsense advice, while charging a fortune for the privilege.

The really good guys are out there coming up with viral marketing strategies that see internet users seeking out their messages and willingly referring them to others in a way that no other medium except the internet can allow.

Some existing TV ads have already gone viral in an almost accidental way, take this Air NZ safety video (YouTube) for example.

With over 3.5 million views, this single video has probably been seen by more people than any of its domestically screened TV ads have been.

And the cost?

Well given that the production costs were already paid before it was put on the internet, this video represents hundreds of thousands of dollars worth of advertising -- for free.

Of course the secret to having a successful viral advertisement that people will actually seek out and watch voluntarily is something that the "experts" will sell you for a fee. However, based on the "bang per buck-spent", it really does look as if viral internet advertising, via YouTube, Twitter or whatever, is the most cost-effective way to promote yourself, your product or your services right now.

With advancing technology and increasing alternative sources for electronic entertainment, the TV ad might just have had its day and be about to fall from its decades-old position as the "best" advertising medium.

Once again, the internet has shown itself to be a great playing-field leveler. This time it's letting the Net-savvy advertiser reach a huge audience for a fraction of the cost it would take to do the same thing using traditional mainstream media methods.

Monday, June 29, 2009

Technology continues to advance at an ever-increasing pace and usually, we're better off as a result.

However, there seem to be a growing number of instances where our attempts to improve our lot are having unexpected side-effects.

One example of this which might very well have a nasty effect on the electronics and computer industries is the switch away from lead-based solder.

Yes, the shiny gray metal alloy that for years has been used to attach all those various components to printed circuit boards has undergone a subtle change in recent years - and it's all down to a fear of lead.

Lead, as most people will know, is a soft, malleable and very heavy metal with quite a low melting point (as far as metals go). This low melting point and relatively good electrical conductivity meant that an alloy of tin and lead made the ideal "glue" for connecting wires inside electronic devices.

When the printed circuit was introduced back in the 1960s, there was no lead-phobia. In fact, houses were regularly decorated with lead-based paints, the glaze on kitchenware frequently contained a small percentage of lead and our cars belched clouds of the stuff into the atmosphere thanks to the octane-improving effects it had in petrol.

Roll forward a few decades however, and lead had become a villain.

Everywhere you looked, lead was being shunned as awareness grew of the subtle and devastating effects that it could have on health and the environment at large.

As a result of this, the RoHS directive was introduced and lead use fell under its umbrella.

One of the first effects of this "lead-be-gone" campaign was a subtle change to the solder used in our computers and other electronics. The toxic lead was dumped, replaced with a cocktail of other metals including bismuth, silver, indium and zinc.

Unfortunately, in ditching the lead, these new solders have created what some believe is a disaster waiting to happen.

The issues are manifold but the key one is that of tin whiskers -- thin crystals of tin that, over time, grow inexorably out of soldered joints, sometimes to the point where they create a short circuit by reaching adjacent components or circuit-board pads.

Some of the strongest concern comes from within the aerospace community, where a single tin-whisker could bring down an aircraft worth hundreds of millions of dollars.

What's more, the problem isn't something that can easily be weeded-out at production time or even in post-production testing. The tin-whiskers can take years to form to the point where they become apparent or cause problems. The computer that works perfectly today may fail completely tomorrow -- and without any warning.

Although work continues on addressing the issues created by the use of lead-free solders, it appears we're still a long way from coming up with a replacement that is as good as the old stuff.

Some pragmatists in the industry are still lobbying for a return to lead-based solders, claiming that the best way to preserve the environment while also retaining previous levels of solder-joint reliability, is to focus on effective recycling rather than dumping of waste electronics.

The case argued by these people is backed up by the realisation that, even at its peak, the total amount of lead used in solder accounted for just 0.5% of all the lead consumed world-wide. What's more, only 50% of such solder is used for electronics, the balance going into structural soldering such as plumbing etc.
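
Those two percentages combine into a strikingly small number, which is worth spelling out:

```python
# The figures quoted above: solder accounted for roughly 0.5% of all
# lead consumed world-wide, and only about half of that solder went
# into electronics (the rest into plumbing and other structural work).
solder_share = 0.005        # solder's fraction of world lead consumption
electronics_share = 0.5     # electronics' fraction of all solder

electronics_lead = solder_share * electronics_share
print(f"Electronics solder accounted for {electronics_lead:.2%} of the world's lead")
# prints: Electronics solder accounted for 0.25% of the world's lead
```

In other words, the entire electronics industry was using about a quarter of one percent of the world's lead -- which is the nub of the pragmatists' argument.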

So, if you've got the feeling that your new consumer electronic devices simply aren't as reliable as the old ones were - perhaps you can blame the solder they're now forced to use.

Thursday, June 18, 2009

The technology war taking place between publishers and pirates looks set to intensify, as those who seek to unlawfully copy and disseminate music and video move to lift their game.

A successful lawsuit against the operators of the infamous Pirate Bay website has spurred some of the world's smartest and most highly motivated programmers to come up with new tools to thwart the surveillance and detection techniques used by publishers.

The latest addition to their arsenal is a virtual private network (VPN) that effectively hides the nature of content being transferred behind a curtain of strong encryption.

Now, simply monitoring the flow of bits and bytes along an ISP's data circuits will no longer be sufficient to pick up illegally copied software, music or video content.

And, even if authorities suspect that a file transfer contains illicitly copied material, the pirates are ready for them with a tool called BitBlinder that cloaks the user's IP address.

Those charged with the job of identifying and collecting evidence against copyright infringers are challenging the claims of the pirates however, and believe that although these new systems may be a hindrance to enforcement officials, they won't completely hide such illegal activity. Nor, it is claimed, will they stop investigators from establishing the identity of those who are offending.

This skirmish comes at a time when governments are struggling to introduce new policies to cope with an ever-increasing level of piracy. Neither the French nor the New Zealand government has been able to make its "three strikes" laws stick and both have gone back to the drawing board.

In Britain, the government has directed ISPs to cut illegal file-sharing by at least 70 percent within 12 months. Exactly how this will be achieved once the pirates' new tools become widespread is yet to be revealed.

Some experts have expressed the opinion that this war will only really be ended by way of a negotiated settlement rather than by the appearance of some insurmountable technology from either side.

Unfortunately, no such settlement is in sight, so the escalating arms race between pirates and publishers looks set to continue, at least for the foreseeable future.

Perhaps the next step in the pirates' game will be to harness the power of indirection and, instead of storing an entire copyrighted item on any individual site, distribute it across a network of servers in tiny chunks, randomly extracted from the original work. Downloaders would then simply exchange keys that effectively directed them to the servers involved and allowed them to extract the individual fragments -- none of which is large enough to qualify for individual copyright protection.
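
The mechanics of that indirection scheme can be sketched in a few lines. This is purely illustrative -- the function names and the in-memory "store" are hypothetical stand-ins, not any real file-sharing protocol:

```python
import random

def scatter(data: bytes, chunk_size: int = 4):
    """Split data into tiny fragments filed under random IDs.

    Returns the fragment store (standing in for a network of servers)
    and the 'key': the ordered list of IDs needed for reassembly.
    """
    store = {}
    key = []
    for i in range(0, len(data), chunk_size):
        chunk_id = random.getrandbits(64)     # random fragment identifier
        store[chunk_id] = data[i:i + chunk_size]
        key.append(chunk_id)
    return store, key

def gather(store, key) -> bytes:
    # Whoever holds the key fetches the fragments in order and
    # reassembles the original work; the store alone reveals nothing
    # about which fragments belong together.
    return b"".join(store[chunk_id] for chunk_id in key)

store, key = scatter(b"some copyrighted work")
assert gather(store, key) == b"some copyrighted work"
```

The point of the design is that no single server ever holds a recognisable portion of the work -- only the key, exchanged between downloaders, ties the fragments together.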

One thing's for sure -- it's an intensely interesting war with many a strategy yet to be played out.

Thursday, June 11, 2009

Most modern computer systems use some form of LCD monitor to display output and provide visual interaction with the user.

LCDs are relatively energy efficient, compact and are proving to have respectable lifespans in most environments; almost the perfect solution, one might think.

However they're not without their limitations. In some applications (such as laptops and other portable equipment) they can become difficult to read under high levels of ambient lighting or direct sunlight. They are also relatively fragile and can be irreparably damaged by a fall or a knock with a hard object.

Hard-core gamers frequently complain about the inability of some LCDs to keep up with the pace of fast onscreen action, an attribute that's also clearly visible on some of the cheaper LCD TV screens.

So what's better than LCD?

Well it appears that OLED is the next big thing, and has been for quite some time now.

OLEDs (Organic Light-Emitting Diodes) differ from LCDs in one key aspect -- they actively emit light. Unlike the LCD, which simply controls the passage of externally produced light, the OLED is a light source in its own right, which means there's no need for a separate backlight or ambient lighting to use them.

Another key strength of the OLED is its speed, something that could eliminate that annoying smear or latency characteristic of so many LCDs.

To date however, the yield and lifetime of OLED devices has been disappointingly low. These two factors have constrained their use to small, relatively low-cost applications.

Now, if you've been watching TV recently, your heart may have skipped a beat when you saw ads for the Samsung Full HD E-LED flat-screen TV.

What? A 46-inch flatscreen LED TV?

How can this be? Surely, if LED display technology was to follow the same evolutionary path as LCDs, we'd see LED-based computer monitors appearing first, wouldn't we?

Well it turns out that the display on this TV set doesn't actually use a matrix of active LEDs to create the image -- it's simply an LCD with an array of white LEDs around the edge that are used to provide backlighting.

A cunning piece of marketing of course and by all accounts, the picture quality is very good -- but a true LED-based TV it is not.

So, if you're still longing for the ultimate in display technology, your wait is not yet over and it may be a few years yet before we see any really practical and affordable big-screen OLED displays for computers or TV.

In the meantime we'll just have to make do with our LCDs and the occasional CRT, while continuing to read article after article telling us how great these OLEDs will be, when they finally get to market.

Thursday, June 4, 2009

Spam. Not only does it clog your inbox with offers for products you probably neither need nor want, but it's also a vector for the delivery of scams, malware and attempts to phish your login details.

Over the years, many different tactics have been tried in an attempt to stem the flow of spam but although some, such as filtering, do a reasonable job of hiding the problem, they are only a symptomatic treatment.

The big problem has always been that the basic services on which internet email is built were designed in an era before the ghastly spectre of spamming ever raised its ugly head.

Those who first designed the SMTP mail protocol never dreamed that, one day, hordes of spammers would hijack their hard work to regularly deliver billions of unsolicited commercial emails to a world of unwitting recipients.

Since then, the attempts to "update" the mail system in ways that would stem the rising tide of spam have been ankle-tapped by an unwillingness to scrap the now deeply entrenched standard on which the mail system relies.

However, there's now a bright light of hope on the horizon.

By dint of its huge dominance on the net, only one entity has the power to replace the current email infrastructure with something brand-new and spam-proof.

That entity is Google and the new platform that may eventually bring us far more spam-resistant email is "Wave".

Effectively forsaking the old SMTP protocol for something newer has allowed designers to start from scratch, moving us tantalisingly closer to the day when spam doesn't represent over 90% of all the email moving around the Net.

But Wave is a lot more than just a replacement for email...

Wave is an integration of many different internet services that have popped up on the web over the years. If you understand the concept of synergy then you'll realise just how much more powerful Wave might be than anything that has come before.

It's early days yet but I encourage everyone to fire up their browsers and do their own research on Wave.

Could it be that the tsunami of spam may be laid to rest by the Wave of Google?

Thursday, May 28, 2009

I remember, way back when Netscape was the "browser du jour", some pretty smart people predicting that eventually, conventional operating systems (and Windows in particular) would become redundant.

The belief was that the browser would become the OS and the OS would become the browser.

Instead of an OS, such as Windows, appearing when you started up your computer, the first thing you'd see would be your browser. Indeed, there were rumours that Netscape themselves were working hard on a Windows-killer environment that included a browser and OS all rolled into one.

Well of course we all know that Netscape is no longer the dominant browser, that position having fallen to Microsoft for many years.

While it dominated the browser and OS marketplace, Microsoft did try to integrate the two so tightly as to make them inseparable - but the EU decided such a high level of integration was "uncompetitive" and a forced separation followed.

But now it's 2009 and once again the prospect of browser-based computing has come to the fore.

This time we have (at least in part) the concept of cloud-computing to thank.

Thus, once again, the prospect of your browser becoming not only your most commonly used application but also your computer's operating system appears to be a viable one.

Helping the browser in its "desktop takeover" move is the introduction of HTML5, a new set of standards that looks set to significantly increase the power and flexibility of the humble web-browser.

And now, instead of Netscape, it's Google who is at the forefront of turning your PC into little more than a platform for the web browser.

It's not by accident that Google invested millions in developing its Chrome browser. It knows full well that the winner of the computing game is no longer the one who owns the desktop. Instead, it will be the one who owns the browser and the gateway to the internet.

If, as appears to be the case, a huge transition is about to take place from desktop to cloudtop, the relevance of the operating system used falls significantly. Once people become used to using their browser as the primary interface to their computing world, it won't matter nearly so much whether they're running MS Windows, Linux, or any other OS.

Providing there's a consistent browser interface across all platforms, both the hardware and the OS become irrelevant.

And, he who "owns" the browser, owns the market.

Google is one very, very smart company - as Microsoft may be about to discover to their cost.