September 14, 2011

In my last post, "Going Faster Has a Price," I discussed the issues with transmitting bits represented by two states at faster data rates and the problems of inherent loss in the media, ISI, and the many other phenomena that screw up the signal. Through careful channel design and active means, engineers can transmit and recover bits over copper cable and backplanes at ever greater rates. For example, National Semiconductor and Molex demonstrated 25+ Gbps communications over a backplane at DesignCon 2011 this year. But how long can the industry keep doing this without changing the way we define a bit on a backplane?

This problem is not a new one... as a matter of fact, it is a very old one going back to the early telecom days of modems. In the early days of circuit-switched (voice) networks, filters were placed in the system to limit the bandwidth of the signal to around 3 kHz, which was enough to reconstruct a human female voice without distortion. This was done primarily as a means to frequency-multiplex multiple telephone circuits onto a single microwave transmission between towers (before fiber-optic lines). So when people tried to move "bits", they were limited to the 3 kHz bandwidth. Enter the Shannon-Hartley capacity theorem: C = B log2(1 + S/N).

What this says is that the maximum capacity of a channel to carry information is a function of the bandwidth (B) in Hertz and the signal-to-noise ratio (S/N), which has no units. So as your noise goes up, your capacity to move information goes down. This plagued early engineers and limited the amount of information that could be moved through the network. Early modems used Frequency Shift Keying (FSK): one frequency was used to indicate a "0" state and another to represent a "1" state. The frequencies were chosen so that they would pass through the 3 kHz limit of the channel and could be filtered from the noise. The problem is that you couldn't switch between them faster than the bandwidth of the channel, so you were still limited to the 3 kHz... so how did they get around this? They used symbol coding.
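To make the limit concrete, here is a quick numeric sketch of the theorem (the function and variable names are mine, not from the post):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz voice channel with a respectable 30 dB SNR (S/N = 1000):
snr = 10 ** (30 / 10)
capacity = shannon_capacity_bps(3000, snr)
print(f"{capacity:.0f} bps")
```

That works out to roughly 30 kbps, which is about where dial-up modems eventually topped out over the real phone network.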

Symbol coding basically combines groups of bits into a single symbol. That symbol can be represented by a frequency carrier and a combination of amplitude and phase. This led to the development of the Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM) techniques that are in use today in modern cable modems. A group of bits can be sent all at once instead of one bit at a time... clever! However, it comes at a cost and a fair amount of complexity relegated to the world of digital signal processing.
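As an illustration of the idea (a minimal sketch, not any particular modem's implementation), here is Gray-coded QPSK packing two bits into each transmitted symbol:

```python
import math

# Gray-coded QPSK: each pair of bits selects one of four carrier phases,
# so two bits travel in a single symbol.
QPSK_MAP = {
    (0, 0): complex(1, 1),
    (0, 1): complex(-1, 1),
    (1, 1): complex(-1, -1),
    (1, 0): complex(1, -1),
}

def qpsk_modulate(bits):
    """Group bits into pairs and map each pair to a unit-energy symbol."""
    assert len(bits) % 2 == 0, "QPSK needs an even number of bits"
    norm = 1 / math.sqrt(2)  # scale so every symbol has magnitude 1
    return [QPSK_MAP[(bits[i], bits[i + 1])] * norm
            for i in range(0, len(bits), 2)]

symbols = qpsk_modulate([0, 0, 1, 1, 0, 1])
print(len(symbols))  # 3 symbols carry 6 bits
```

The symbol rate on the wire is half the bit rate - exactly the trick the early modem designers needed inside a fixed 3 kHz channel.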

But what about the high-speed digital signal paths between two systems in our modern Internet? Today they use scrambled Non-Return-to-Zero (NRZ) coding, which prevents DC wander and EMI issues... but it is still either a "0" or a "1" state - two levels representing the state of a bit. Will this medium ever move to other coding schemes to get more data through the channel, as the early telephone system did? It might. Intel and Broadcom are both pushing for a standard that uses multiple levels and symbol encoding for 25 Gbps and beyond. This has the added benefit that more bits can be sent in a single transmission of a symbol. This is already being done today in Ethernet for the 10/100/1000 CAT-5/6/7 standards over UTP cable, where the bandwidth of the channel is limited to around 350 MHz. Will we see this at 25 Gbps and beyond? Possibly...
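The multi-level schemes being proposed are generally PAM-style. As a rough sketch (I'm assuming a Gray-coded PAM4-like mapping here; the actual proposals may differ in detail), four amplitude levels carry two bits per symbol:

```python
# PAM4: four amplitude levels encode two bits per symbol, so the symbol
# (baud) rate is half the bit rate for the same throughput.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}  # Gray-coded

def pam4_encode(bits):
    """Map pairs of bits onto the four nominal amplitude levels."""
    assert len(bits) % 2 == 0, "PAM4 needs an even number of bits"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

# 8 bits become 4 symbols: a 25 Gbps stream needs only 12.5 Gbaud on the wire.
print(pam4_encode([1, 0, 0, 0, 1, 1, 0, 1]))  # [3, -3, 1, -1]
```

The catch, as the next paragraph explains, is that recovering four closely spaced levels out of the noise takes DSP horsepower that a simple two-level NRZ receiver does not.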

The problem with this method is power. It takes DSP technology at each end of the channel to code and recover the signals, adding energy consumption to the mix. With thousands of channels in a modern data center, that power can add up really fast. NRZ techniques are very low in power consumption. National Semiconductor has produced devices that can move data at rates of 28 Gbps over copper media and backplanes at very low power consumption - something multi-level systems will find difficult to do. The industry agrees and is pushing back on the multi-level proposals.

There may come a day beyond 28 Gbps where there is no alternative but to go to multi-level symbol encoded systems, but I think that may be some time off in our future when 100 Gbps is far more common - perhaps even to your cell phone! Till next time...

March 17, 2011

As you know, if you want a car that you can drive at 150 MPH, you will pay a premium, since it will require additional technology to keep you connected to the road and overcome the frictional forces of the air - as well as the "Gee, I look really cool in this car" effect, which at times comes with an even greater price tag. In the physical world of high-speed data and signal integrity, these laws also apply. Dr. Howard Johnson knows this well and has published several books on the subject. Even the subtitle of one of them, "Advanced Black Magic," implies the difficulty of designing high-speed systems.

Well folks, it isn’t getting easier. In fact, it is getting far worse. What is interesting about our world is our fundamental quest for knowledge - and the richer the content of the information, the quicker people learn or share it. There is also the desire to communicate, and the same applies there... the richer the content (photos, videos, music, etc.), the more appealing the media. With the passage of DMCA Title II, which protected service providers from copyright infringement when making local copies to stream (or from the unscrupulous pirates stealing it from them), along with the deployment of DOCSIS modems (now version 3.0, exceeding 100 Mbps up and down - if the OSP is willing), the stage is set for one of the largest bandwidth explosions ever witnessed by man.

This expansion of bandwidth is driving data center equipment to ever increasing capacities... it wasn’t long ago that 1 Gbps was fast... not any more. The norm in data centers now is 10 Gbps Ethernet (802.3ae - optical), and it is quickly moving to 100G Ethernet! The latter has been accomplished via 10 lanes of 10 Gbps but is moving to 4 lanes of 25 Gbps, which matches the number of lasers and receivers found in most 100G modules. Do you know what happens to a 25G signal when it travels over a backplane? It isn’t pretty. In fact, 10G has issues as well, and it’s amazing that it works at all...

For example, take a look at the image below. This is a comparison of PCI Express signals (generations 1 through 3) over 26 inches of differential traces on a PCB (FR-4). As the speed of the signal increases, the eye opening decreases. What used to work without issue now requires either a change in board material or active circuitry to restore the signal. These signals are far slower than the 25-28 Gbps streams now being considered for the electrical interface to optical modules. Without signal conditioning, careful layout (thank you, Dr. Johnson), and good impedance control... no bits, just noise...

If you want to know more about fixing this, visit http://www.national.com/datacom and watch some of the cool videos on how it’s done... as Spock would say... "Fascinating"... Till next time...

December 16, 2010

In the past I’ve discussed topics such as virtualization and digital power to help improve data center processing efficiency. I may even have discussed additions to the 802.3 standard to idle Ethernet drops when they are not in use. However, I have not addressed the interconnect power itself, and what I found was surprising.

In medium-scale data centers, such as those run by financial institutions, large retailers or corporations, you will find thousands of server blades and the networking equipment to connect them together. What is interesting about this architecture is that the majority of networking traffic occurs within the data center itself. The reason for this is partially due to the litigious nature of our society and the never-ending quest for information to help us understand ourselves. For example, simply performing an on-line stock trade - which to the user is a single transaction - will spawn dozens of additional inter-server transactions to secure, execute, verify and log the event, as well as extract statistics used in market analysis. So when millions of people are on-line day trading stocks, billions of transactions are occurring within the data centers.

This huge amount of traffic needs bandwidth, and traditionally this has been supplied by fiber optic cable. Fiber has the advantage of a very small diameter, thus providing space for air-flow to cool the systems. Larger copper wire could be used for short hauls, but its diameter would block the air-flow and cause over-heating. Fiber requires light (lasers) to operate, and different distances and data rates require different modes of optical transmission. To allow flexibility, equipment manufacturers have created connectors that accept a module containing the laser and receiver electronics.

There are many variants, but the most accepted standards are SFP+ (Small Form-factor Pluggable), QSFP (Quad SFP), CFP ("C" or x100 Form-factor Pluggable), XFP (10 Gigabit Small Form-factor Pluggable), and CXP. These modules are actively powered and consume 400-500 milliwatts of power each! When you have thousands of them, the power quickly adds up. Additionally, the heat generated must be dealt with, and the modules are also very expensive.

Now what’s most interesting is that the majority of interconnects within the data center are only a few meters long! Normally, passive copper cables would work fine, but as mentioned above they would decrease the airflow at the back of the equipment. So a clever solution is to use smaller-diameter copper wire (28-30 AWG), which suffers from higher loss, and place active drivers and equalizers such as the DS64BR401 in the connectors, which fit these standard module sockets. This technique is called "Active Copper" or "Active Cable" and has many benefits in runs of less than 20 meters. The first benefit is cost - these cables can be less than half the cost of the fiber module and cable. The second is power - active cables can reduce the power consumption significantly if properly designed (< 200 mW vs. 400 mW for fiber).

Fiber will always have a place for carrying data long distances, where it excels. However, in the data center, copper wire is regaining ground with the help of active electronics and may become the majority of the media carrying your next stock trade! Till next time...

August 03, 2010

In an earlier post ("Engineer This!") that I published almost exactly one year ago, I challenged engineers world-wide to solve some fundamental issues regarding energy - I think everyone is still working on those (my fusion-powered electric car hasn’t been delivered yet...). However, the other day I was talking with a very wise man I met from Egypt. It is not often you find such wisdom in an individual, and so I wanted to absorb as much of it as possible... we were talking about how we influence one another and how our little actions affect so many. He said something so profound it stopped me in my tracks: "You can know exactly how many seeds are in an apple, but you will never know how many apples are in a seed."

Nikola Tesla and Thomas Edison could never have known how their great inventions would affect so many lives. Simply removing those two individuals’ contributions from history would almost immediately lead to the death of half of our population, followed by many more dying from disease. This is not to say that those technologies would never have been invented; it simply shows how widespread those technologies have become and how much humanity depends on them. But this post is less about great inventions and more about how we as engineers touch others through our work, and also how we can help the next generation of engineers become even greater.

When I was a young designer I had a mentor. His name was Jorge, and he had worked for Control Data Corporation alongside greats such as Seymour Cray (father of the vector supercomputer and founder of Cray Research). I was very fortunate to have had him as a mentor. He challenged me daily and always continued to push my abilities as an engineer. He would assign me design challenges even if he had already completed the design, just to see what I would come up with - and on occasion, I would surprise him. I left that job in 1984 to join National Semiconductor and had a string of mentors during the development of my career - and to this day I continue to have mentors to help guide my decision processes.

So what is this all about? Well, here’s another call to action for all of you out there... and I’m pretty confident these can be accomplished long before my fusion-powered electric car reaches my garage!

1. Become an Engineering Mentor - Young minds are fertile ground, and combined with the enthusiasm of these new designers, they will learn whatever you have to offer. Many companies have formal mentoring programs and will help you find a suitable candidate. In my career I have mentored three young engineers (2 post-graduates and 1 undergraduate), and it has been one of the most rewarding things I have ever done.

2. Find a Mentor - I am a true believer that you can never know too much or have too many skills as an engineer (or as a human being). Much of what I have learned about engineering and business has come from older, more experienced engineers and businessmen, and I owe much of my personal success to them. Finding a mentor is less difficult, since there are always those inside an engineering lab who know the "old man" or a "guru" of something. Usually they will be more than happy to teach you what they know.

3. Share Your Knowledge - Join a professional organization such as the IEEE and attend meetings. There you can share your ideas or discuss possible ways to solve a problem. In many cases you will help someone else solve something by sharing your expertise. There are committees and teams you can join to set standards or work on larger issues, which can be extremely rewarding.

Whatever you choose to do, realize that, as my wise friend stated, "From the apple seed many new apples will be created." Your actions will set in motion the futures of many new engineers, to extents you may never know. In addition, I personally feel that if you want to learn a subject, you should teach it - that forces you to learn it better than your students (or mentees). So either way, you may learn something new, or even something about yourself, in the process. Till next time...

May 27, 2010

This week I thought I’d diverge a bit from the Energy Efficiency theme and share some funny engineering experiences I’ve had over the years. I’ve been a practicing engineer for over 30 years and have been involved in reviewing projects and circuit designs for a good part of that period. I’ve collected a list of several of my favorite oversights (the names have been withheld to protect the guilty). Hope you enjoy these as much as I did when I found them!

Isolate Yourself

One of the funniest (and probably most embarrassing) faux pas I’ve seen was an attempt to provide high-voltage isolation for a power supply. In a design review I had recommended a 0.100" clearance between the primary and secondary sides. About two months later the engineer called me, concerned about something in the design that was causing them to fail their HV isolation test. He emailed me the layout, and in about 10 seconds it became apparent what had happened. The designer had passed my recommendation to the layout technician, who instructed their auto-router to provide the 0.100" clearance between the primary and secondary sides... and the auto-router did just that - funny how machines do exactly what you tell them. The software provided exactly 0.100" of clearance between the PCB lands on the primary and secondary sides of the transformer, but left the default clearance of 0.010" on all the others. It takes about 70 kV to jump 1" in dry air... it would only take around 700 V to get across the barrier the auto-router provided - thus the failing board. Luckily, it was an easy fix and the board passed without issue... the moral of the story? Don’t think your auto-router knows more than you do...
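For the curious, the arithmetic behind the failure is simple. This sketch uses the ~70 kV/inch dry-air figure quoted in the story (real designs should follow proper creepage and clearance standards, not this back-of-the-envelope number):

```python
# Rough standoff voltage for a gap in dry air, using the story's
# ~70 kV per inch figure (an approximation, not a design rule).
VOLTS_PER_INCH = 70_000

def max_standoff_volts(clearance_inches: float) -> float:
    """Approximate voltage a dry-air gap can withstand before arcing."""
    return clearance_inches * VOLTS_PER_INCH

print(max_standoff_volts(0.100))  # the intended barrier: ~7 kV
print(max_standoff_volts(0.010))  # the auto-router's default gap: ~700 V
```

With the default 0.010" gap good for only about 700 V, the board never stood a chance against an HV isolation test.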

Stability Is Your Friend

One day I got a call from an aerospace engineer who was concerned that a batch of voltage regulators was not working properly in their circuit. The design had been working fine for years, but lately they were experiencing failures due to an apparent oscillation in the output voltage of our regulator... this was a very "mission critical" military application that was using a linear voltage regulator to provide a clean voltage in a projectile. I asked the engineer to describe the design. He went on to describe the input power stage and the large heat-sink mounted on the side of the projectile where the regulator was mounted. The regulator’s leads were then wired to the PCB where they provided the regulated voltage - about 16 inches away. The first question I asked was where the output capacitor was located - it was on the PCB, 16 inches from the regulator! I choked for a moment, realizing it was amazing these things were working at all - real kudos to the National Semiconductor engineer who designed the regulator. Effectively, what the aerospace engineer had done was place parasitic inductance (the wire) in series with the capacitor, shifting the stability point of the regulator right to the edge... most of them worked at the temperature where these "projectiles" were deployed... the new batch of regulators had higher gain and thus oscillated. I gently told the guy on the phone the bad news and referred him to the data sheet, where it clearly stated how close the output cap had to be to the regulator to guarantee stability. This raised the next question: "If they are working now, will they continue to work?" The answer was - maybe. It depends on many factors, including process aging, temperature, and the type and gauge of wire used. The engineer on the phone suddenly inhaled, thanked me for my help and hung up... I assume the problem was resolved. I didn’t sleep well for about a year after that.
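To see why 16 inches of wire matters, here is a back-of-the-envelope sketch. The ~20 nH/inch figure is a common rule of thumb for straight hookup wire that I'm assuming here - it is geometry-dependent and not a number from the story:

```python
import math

# Rule-of-thumb series inductance of straight hookup wire: on the order
# of 20 nH per inch (an assumed approximation; actual value depends on
# wire gauge and routing).
NH_PER_INCH = 20.0

def wire_impedance_ohms(length_inches: float, freq_hz: float) -> float:
    """Magnitude of the parasitic inductive impedance: |Z| = 2*pi*f*L."""
    inductance_h = length_inches * NH_PER_INCH * 1e-9
    return 2 * math.pi * freq_hz * inductance_h

# 16 inches of wire, evaluated at an assumed 100 kHz loop frequency:
print(f"{wire_impedance_ohms(16, 100e3):.2f} ohms")  # ~0.20 ohms
```

A few hundred nanohenries in series with the output capacitor adds impedance right where the regulator's control loop expects a low-impedance capacitor, eroding the phase margin - which is exactly why the data sheet insists the cap sit close to the regulator.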

The Linear Boost Converter - Not!

OK, this one really made me think... where did we go wrong in writing the data sheet? I received an email from an "engineer" who was wondering why his circuit was not working properly. I replied and asked for that section of the schematic so I could review the design. Within the hour, an email with an attachment showed up. I opened the PDF and had to stare for a minute... this could not be right. They were using a linear regulator where the input voltage was lower than the programmed (resistor-divider-set) output voltage. They were supplying 5V to the regulator and expecting 12V at the output! That is fine for one of our boost Simple Switchers, but it is not going to work with a linear regulator. OK, now I wondered how I was going to respond... so I sent a copy of the data sheet pointing out the "drop-out" voltage, or loss component, of the regulator, along with a boost power supply application note, and introduced this person to the world of switching regulators. I doubt that would happen today, since integrated switching regulators are so common and most likely taught in university programs.

Disclaimer

Any similarities to the above problems are coincidental and not intended to make you feel bad if you made the same mistake... we’ve all made mistakes - the real question is, "Did you learn from them?" Engineering history is filled with stories of failures and bad decisions ("let’s use the most reactive chemical in the world to float a dirigible" kind of thing). But the main thing is that we learn and improve our skills as engineers - I certainly have tried, and I continue to do so every day. Hope you enjoyed these tales from the past. Till next time...

May 14, 2010

I often write about saving energy, improving efficiency, and lowering your planetary impact... so it’s time for me to come clean and show you my efforts to reduce my carbon footprint. When I first designed my home back in 2000, I wasn’t thinking energy costs were going to skyrocket. Instead I went for good efficiency, but not great efficiency... I’m paying for that now. Even though our home is built from solid poured-concrete walls with very high "R" factors, the overall open design allows large amounts of leakage through many avenues, such as doors and windows. Even with basic window films, improved insulation, and better living habits, we still struggle to keep our home comfortable yet efficient in its use of electricity and propane...

I’ve given this much thought over the years and have recently embarked (as mentioned in my prior post, "Ignorance is Bliss") on a massive project to automate, well... just about everything that can be automated in our home. The idea is to instrument everything (or most things that use power) to understand where the energy is going and to use that information for making decisions on energy use. For example, if the TV and lights are on in the family room and the alarm system is set to "AWAY" mode, then the system should turn off the TV, adjust the thermostat to save power and turn off all the lights. In everyday life, we are so caught up in our schedule that remembering to do these simple things falls far down on our list. A "Smart Home" that knows your lifestyle can save you power if it is properly equipped - that’s where "getting your hands dirty" comes in... however, it feels like I’m trying to move an eight lane highway without disrupting the flow of traffic - not so easy (see my wiring closet photo below).
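The kind of rule described above can be sketched in a few lines. This is a minimal illustration - the state keys and action names are made up, not from any actual home-automation product:

```python
# A minimal sketch of the "AWAY mode" rule described in the post:
# if the house is set to AWAY and the family-room loads are on, shed them.

def away_mode_rule(state: dict) -> list:
    """Return the list of actions to take given the current home state."""
    actions = []
    if state.get("alarm") == "AWAY":
        if state.get("family_room_tv"):
            actions.append("turn_off_tv")
        if state.get("family_room_lights"):
            actions.append("all_lights_off")
        # Always relax the thermostat when nobody is home.
        actions.append("thermostat_setback")
    return actions

print(away_mode_rule({"alarm": "AWAY",
                      "family_room_tv": True,
                      "family_room_lights": True}))
# ['turn_off_tv', 'all_lights_off', 'thermostat_setback']
```

The hard part, of course, is not the rule logic - it is instrumenting all the disjointed systems so the controller can actually read that state and act on it.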

I never thought about how isolated our home's systems were until I began this project. The lights were originally manual switches (I was the automation) which I replaced over a period of a year with Universal Power-line Bus (UPB) smart switches that are networked together over the power line (no new wires). The thermostats were individual manual units without setback or other communications ability. The hot water heater is gas (propane) and is simply on or off... same with the recirculation pump. The appliances have no power metering or timing ability and cannot communicate with anything - except a human operator. The list goes on... so you can see the complexity of trying to tie all of these disjointed systems together as well as adding the sub-metering ability.

So I’ve begun by prioritizing the largest users of power that I can control... my list is HVAC, lighting and hot water (propane). I need to know what’s on, as well as the state of the home (security set to away or home, time of day, weather conditions, etc.), to make proper decisions. The lighting system is 90% complete - almost all switches are automated and networked, so I can address a single unit or, using the protocol, address all units at once via "links". These links are pre-programmed to take a switch to a certain level (on, off, 20%, etc.). So using an "all off" link, I can turn all the lights in the house off with one command.
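Conceptually, a link works something like this (the class and link numbering are my invention for illustration, not the actual UPB protocol): each switch stores a preset level per link, and a single broadcast activates them all at once.

```python
# Sketch of the "link" idea: every switch carries its own preset level
# for each link it belongs to, so one broadcast command fans out into
# per-switch behavior with no central controller in the loop.

class SmartSwitch:
    def __init__(self, name, link_levels):
        self.name = name
        self.level = 100                 # assume the light starts on at full
        self.link_levels = link_levels   # link id -> preset level (0-100%)

    def on_link(self, link_id):
        # Each unit reacts only to links it was programmed with.
        if link_id in self.link_levels:
            self.level = self.link_levels[link_id]

ALL_OFF = 1  # hypothetical link id programmed into every switch
switches = [SmartSwitch("kitchen", {ALL_OFF: 0}),
            SmartSwitch("porch", {ALL_OFF: 0})]

for sw in switches:  # one command on the power line; every unit reacts
    sw.on_link(ALL_OFF)
print([sw.level for sw in switches])  # [0, 0]
```

Because the presets live in the switches themselves, "all off" costs one command on the power line no matter how many units are installed.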

The HVAC is a bit more complicated due to multiple air handlers... they need to be coordinated so they are not fighting each other to cool or heat the home. A zoned system would have been much better (I wasn’t minding the store that day...), but we have what we have. So, replacing each thermostat with a computerized setback version was the first step (and the most reasonably priced solution). This has greatly reduced our consumption in general, but there’s still money on the table. The next step is automated thermostats with communications ability. These can be networked (RS-485, UPB, etc.) so that computer software can force a condition (off, setback, etc.).

The hot water is simpler since we have a recirculating pump that can be turned off - this limits how much water is being heated and can dynamically be turned on when people are home thus saving propane. The appliances are a different matter. There have been talks for years of appliance communication standards so that HA systems can have control (the universal remote for everything, etc.). Each appliance manufacturer had their proprietary scheme of how it should be done, and after years of trying to come to a common standard, it fell apart. This was partially due to a lack of a "need" - no one could rationalize why someone might want to control their washer, dryer, oven or dish-washer from a home computer... until oil prices shot up sending electricity costs through the roof. I personally felt that one and I’m sure you did too. However, no one in our home has finished cooking and left the oven on (so far), so that’s pretty low on my priority list...

If you want to know more about the ancient attempts for unifying everything in the home, check out the EIA-600 CEBus standard... some really great OOP concepts, but it never flew. Also UPB, INSTEON, and Z-Wave are all lighting (and other equipment) control standards with products available today... I’ll keep you updated as I try to finish what I’ve started, but as they say, "The Blacksmith’s kitchen often has wooden utensils". Till next time...

April 05, 2010

Imagine that it’s now the year 2093, two hundred years after the World’s Columbian Exposition of 1893, where Westinghouse lit the event with 100,000 incandescent bulbs, amazing the Victorian visitors with artificial electric light. In this future at the end of the 21st century, the electronics industry has greatly matured and also diverged. Micro-electromechanical Systems (MEMS) have merged with analog semiconductor technology to create entire laboratories on silicon and diamond that fit on a pinhead, while digital functions have moved to the quantum mechanical realm of matter.

Digital chips are no longer referred to as microelectronics but rather as nano-electronics, utilizing groups of quantum dots to form interconnects and logic. Small-geometry CMOS processes faded away around 2020 (the last were sub-16-nanometer 3D structures) with the introduction of production-grade, high-temperature Double Electron-layer Tunneling Transistors (DELTTs) and some limited quantum interference devices. These quantum well devices were unipolar, exhibiting both positive and negative transconductance depending on gate voltage. This eliminated the need for complementary device types ("n" and "p"), resulting in greatly simplified structures. These devices were eventually replaced with Quantum Dot Transistor (QDT) variants.

Logic is no longer based on electron currents but rather on electron position... using this method, molecular-size gates and functions are commonplace. Computers now run at equivalent clock speeds exceeding 40,000 GHz (although no clock is running) thanks in part to quantum effects and new quantum architectures. The additional byproduct is extremely high energy-efficiency, resulting in almost no waste heat.

Analog semiconductors on the other hand have merged with nano-scale machines to form complete sensor and analysis engines on a single device. These devices are so small and consume so little power that a complete health monitor fits into a ring the size of a wedding band. Utilizing mechanical and thermal energy harvesting, no batteries are required and communication with the "net" is accomplished through large arrays of nano-access-points spread like fertilizer across the countryside. Even structures have pea-size sensors embedded right in the concrete mix during construction that utilize RF energy harvesting to relay the state and stresses providing status in real time.

The world of 2093 as seen in this vision would not exist if not for the never ending march of semiconductor performance - both analog and digital. Many of these "proposals" of our future are based on research going on right now across many disciplines. Economic progress dictates growth and if there is to be growth in the semiconductor industry, engineers and researchers will find ways to navigate the physical laws of our world to make gains in performance. My vision is not everyone’s, but with some historical review, a look at current technologies in production, and a little bit of imagination I’m sure you can imagine the world of 2093 as I do... potentially so far advanced that we (like our Victorian predecessors holding an iPhone) would not even recognize the technology! Comments are always welcome - until next time...

March 17, 2010

As I discussed in part one of this series, electrical power and lighting were debuted at the 1893 World’s Columbian Exposition by Westinghouse and Nikola Tesla. Now, a little over 100 years later, we carry in our pockets technology so advanced that the Victorian-era population would not even recognize it as such - in fact, it might have been considered magic or supernatural.

But what is the next stop beyond the current generation of digital and analog semiconductor devices? I recently had a chance to talk with National Semiconductor’s Chief Technology Officer and Director of NS Labs, Dr. Ahmad Bahai. I asked him what he thought of today’s high performance processes and the never ending progress to smaller geometries. He said, “Analog scaling doesn’t buy you any performance advantage” – primarily due to the way the transistors are used. Analog functions still require isolation and moving things closer only complicates the issue.

Digital processes use transistors as switches, avoiding the “in-between” or linear regions of the devices. Analog semiconductors actually take advantage of the linear region to provide amplification and accurate control of voltage and current. In mixed-signal semiconductors such as Analog-to-Digital Converters (ADCs), there is a percentage of digital logic, but it is usually such a small percentage of the function that shrinking it doesn’t provide great benefit.

I also asked him what he thought about digital process nodes below 22nm. He made an interesting comment, “below 22nm is it (the semiconductor) electrical or mechanical?” This raises an interesting question… at these tiny geometries is it easier to build switches from mechanical functions? With so much research going on with carbon nano-tubes, it may remain in the realm of electrical for the near term. This along with the ability to stack circuits may continue to raise the transistor count to give Moore’s law another 10+ years.

For now the move in analog semiconductors will be to enhanced processes such as Silicon Nitride (SiN), Silicon Carbide (SiC), Gallium Nitride (GaN) and other familiar materials to increase the power handling capability – a property very important to energy management and technologies such as the Smart Grid. Beyond that, you’ll have to wait until part III… till next time…

February 24, 2010

At the 1893 World’s Columbian Exposition (aka: Chicago World’s Fair) Westinghouse and Nikola Tesla introduced the world to alternating current as well as the Westinghouse brand light bulb of which 100,000 were used to light the event. There was a great optimism with regards to technology and the future during that period and a little over 110 years later I believe that enthusiasm still survives. The idea of an electric world where everything would be powered by this magical force won over many of the visitors to the fair and set in motion a technological revolution.

Today, electric power is so common that it is considered ubiquitous and is quickly obvious by its absence. It is unthinkable that a residential or commercial structure would lack electric service. Our very existence is dependent on electric power, and without it much of the world would die from starvation and disease. We often take this modern marvel for granted and realize just how much we depend on it only when it’s not there (I heard first-hand accounts during the 2004 Florida hurricane season, when friends lost power for weeks).

So looking forward into the 21st century, how will electricity be viewed in 2093 - 200 years after the 1893 event? I think it will be considered the primary power source for everything including all transportation and personal vehicles. The raw energy sources for the current state-of-the-art power plants come from many different forms and many rely on carbon based fuels. This will shift eventually toward cleaner forms of energy such as harvesting (e.g. wind, solar, wave, etc) and other nuclear methods (e.g. LFTR technology, fusion, etc. - see my post, "The End Of The Carbon Age"). This shift will provide electrical power at lower costs, and combined with improved storage and transmission technologies will finally give us an all-electric infrastructure.

But what is the future of the semiconductor industry in an all-electric world? It is hard to fathom how semiconductors of the late 21st century might be fabricated or what functions they may provide. Carbon nanotubes may replace silicon as the material of choice in future devices - as Yoda might say, "The future, cloudy it is..." But there can be no doubt that semiconductor nanotechnology will be central to the everyday life of the citizens of that era, just as it is today. In our modern world it is difficult to find something that does not contain electronics. So the next time you pick up your cell phone to text your friend, log on to the W3 or play that game on your PS3, remember to thank a semiconductor engineer (or any engineer, for that matter).

Over the next several weeks I will be exploring technologies that may emerge as the functional device building blocks of the next wave of semiconductors. There are so many candidates that I will focus on the key technologies so that reasonable predictions can be made. So get out your virtual time machine and let's take a look into the future... cloudy as it may be! Till next time...

February 04, 2010

I’m taking some literary license with the title of this post, borrowing from Robert Burns’ poem "To a Mouse" to echo its symbolism... in our case, the end of the computer mouse and the rise of "touch". With its latest entry, the iPad, Apple, along with many other makers of touch-enabled smart phones and touch-screen computers, is laying the groundwork to retire the computer mouse to the realm of museums.

It is curious to note that analog technology is the first to "see" information... that is, humans and the world around us are analog in nature. So when information is gathered, it is first in the form of analog signals. Analog is also the last stage: LCDs convert digital pixels into light (using analog transistors on the glass) for us to see and perceive, and audio is produced by amplifiers driving mechanical sound generators (e.g. speakers). Seeing and hearing are native to humans, and so is touch.

A baby instinctively understands touching and being touched. Babies first communicate via primitive exchanges of touch and learn to coordinate seeing and touching as they grow... it is as natural as breathing. However, for the past 20-plus years, the computer "mouse" has been a mechanical means to "point" and "click" on an object on a display. It is not a natural motion to move a "mouse" that in turn moves a "cursor" the user must align with an object on the display. These are disjointed actions and often difficult to learn, but learn we must, since modern computer operating systems require it! Imagine trying to use Windows or OS X without a mouse... I have... it isn’t pretty.

But through the work of people like Jeff Han and his company Perceptive Pixel, Apple, Microsoft, and many others, the concept of touching a computer display is now well accepted. I own an iPod Touch, and the interface is so natural that I wish all computers worked this way. But engineering touch into displays has been an uphill climb with many barriers: resistive methods expose circuitry to electrostatic discharge, capacitive methods require more complex electronics, camera-based methods are not easy to implement in LCD displays, and the list goes on. However, I will predict that in 5 to 10 years, just as LCD backlights moved to LEDs, all displays, whether OLED, LCD or another technology, will be touch enabled. They will be built that way by default.
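To make the resistive method above a bit more concrete: a 4-wire resistive panel reports a touch as two voltages, digitized by an ADC, that must be calibrated and scaled into pixel coordinates. Here is a minimal sketch of that mapping; the function name, ADC width and calibration constants are illustrative assumptions, not taken from any particular touch controller.

```python
# Sketch: mapping raw ADC readings from a hypothetical 4-wire resistive
# touch panel to screen pixel coordinates. All constants are illustrative.

def adc_to_screen(adc_x, adc_y, width=1024, height=768,
                  cal=(200, 3900, 150, 3950)):
    """Linearly map raw 12-bit ADC readings to pixel coordinates.

    cal = (x_min, x_max, y_min, y_max): raw readings observed at the
    panel edges during a one-time calibration pass.
    """
    x_min, x_max, y_min, y_max = cal
    # Clamp readings to the calibrated range, then scale each axis
    # into the display's pixel range with integer math.
    adc_x = min(max(adc_x, x_min), x_max)
    adc_y = min(max(adc_y, y_min), y_max)
    x = (adc_x - x_min) * (width - 1) // (x_max - x_min)
    y = (adc_y - y_min) * (height - 1) // (y_max - y_min)
    return x, y

# Touches at the calibrated corners land on the display corners.
print(adc_to_screen(200, 150))    # top-left
print(adc_to_screen(3900, 3950))  # bottom-right
```

The calibration step is what the "more complex electronics" of capacitive sensing largely replaces: a resistive controller is simple, but its raw readings drift and must be re-mapped in software.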

Touch is emerging as one of the last great interfaces between humans and machines (the spoken word is close behind). With Windows 7 offering native touch support and Apple’s iPad designed around touch, it should be obvious that the natural analog methods of interacting with a machine (point and touch, flick, drag, draw, etc.) will soon be ubiquitous, and machines without them will be considered "yesterday’s technology".

I often think back to the days of alphanumeric-only computer displays. Yes... I am that old. You had 256 characters to work with (some of which could be placed side by side, above and below to form primitive graphics), and most of the information displayed was text... there were no (or extremely limited) graphics. If the system was a workstation, the graphics were formed by electron-beam vectors on cathode-ray storage tubes that "drew" the image while you watched... we’ve come so far. Yet that was not so long ago, and today almost all computers have fantastic graphical capabilities! So mark your calendar... it’s only a matter of time before the mouse (like the poor mouse in Burns’ poem) is history.