Recently Judy Collins appeared at our local bookshop in Rhinebeck, NY, one we frequently visit, to promote a children's book. Gosh, how time flies! It was just yesterday that we were wearing out her LPs in the college dorms. One of my favorite Judy Collins songs was 'Both Sides Now', which contains the line 'I've looked at clouds from both sides now, from up and down, and still somehow, it's cloud illusions I recall...' So what are the real benefits of cloud? And what is just an illusion? Here are some cloud musings based on my experience in industry solutions for a smarter planet:

How can cloud be used in development for industry oriented applications?

Cloud computing can be used in development for intense simulations of assembly operations. These are complex simulations of whole systems, as opposed to a single pump, for example. Simulations generally require high performance computing (HPC), which can be cost prohibitive for anyone except the largest enterprises. One example is systems verification management for the Electronic Design Automation industry. Systems verification is the testing of integrated circuit hardware and embedded software to identify defects. Coverage verification is a type of systems verification that uses random testing of a chip design simulated in software. Because software simulations run extremely slowly compared to actual hardware, enough tests can never be run to completely verify a chip design. Therefore critical functions are chosen to be "covered" by simulation testing. Modern chip complexity is driving manufacturers to coverage verification to vet designs before they hit the expensive silicon and to speed time to market. In fact, coverage verification runs approaching 1 million simulations per day on an HPC environment are now becoming the norm. The HPC environment must be utilized and maintained to the maximum extent possible to achieve the quickest time to market, and that can be achieved efficiently using cloud computing.
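To make the idea concrete, here is a toy sketch of coverage-driven random verification. The "design," the coverage bins, and the thresholds are all invented for illustration; a real EDA flow runs orders of magnitude more tests across an HPC farm, but the loop has the same shape: apply random stimulus, accumulate coverage, stop when the critical functions are all covered or the budget runs out.

```python
import random

# Critical functions chosen to be "covered" by simulation testing.
COVERAGE_BINS = {"alu_add", "alu_sub", "alu_overflow", "bus_grant", "cache_miss"}

def simulate(stimulus):
    """Stand-in for a (slow) software simulation of the design.
    Returns the set of coverage bins exercised by this stimulus."""
    hit = set()
    a, b = stimulus
    hit.add("alu_add" if a + b < 256 else "alu_overflow")
    if a >= b:
        hit.add("alu_sub")
    if a % 4 == 0:
        hit.add("bus_grant")
    if b % 7 == 0:
        hit.add("cache_miss")
    return hit

def run_campaign(budget=10000, seed=42):
    """Apply random stimulus until all bins are covered or budget is spent."""
    rng = random.Random(seed)
    covered = set()
    for test in range(1, budget + 1):
        stimulus = (rng.randrange(256), rng.randrange(256))
        covered |= simulate(stimulus)
        if covered == COVERAGE_BINS:
            return test, covered
    return budget, covered

tests_run, covered = run_campaign()
print(f"{tests_run} random tests to hit {len(covered)}/{len(COVERAGE_BINS)} bins")
```

Because each test is independent, the campaign parallelizes trivially, which is exactly why elastic cloud HPC capacity fits this workload so well.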

Another example is smart grid. One focus for smart grid is demand management: the ability, during brownouts for example, to dynamically deallocate power to nonessential devices like pool pumps and allocate power to schools, hospitals, or certain appliances in your home. This requires visibility up and down the delivery chain to determine where and how power is being used and whether its delivery is as efficient as possible. Dealing with a range of variable and unpredictable outages requires the ability to dynamically allocate compute resources for the task, and we have found cloud to be an efficient way to manage a smart grid.
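A minimal sketch of that demand-management idea, assuming a simple priority scheme (the device names, kW figures, and priorities below are illustrative, not from any real grid system): when demand exceeds available supply, shed the least essential loads first.

```python
from dataclasses import dataclass

@dataclass
class Load:
    name: str
    kw: float
    priority: int  # higher number = more essential

def shed_loads(loads, available_kw):
    """Return (kept, shed) lists so that kept demand <= available supply,
    dropping the least essential loads first."""
    kept, shed = [], []
    demand = sum(l.kw for l in loads)
    # Walk loads from least to most essential; shed until demand fits supply.
    for load in sorted(loads, key=lambda l: l.priority):
        if demand > available_kw:
            shed.append(load)
            demand -= load.kw
        else:
            kept.append(load)
    return kept, shed

loads = [
    Load("hospital", 500, priority=10),
    Load("school", 200, priority=8),
    Load("residential HVAC", 300, priority=5),
    Load("pool pumps", 150, priority=1),
]
kept, shed = shed_loads(loads, available_kw=900)
print("shed:", [l.name for l in shed])  # pool pumps and residential HVAC
```

In a real grid the hard part is the visibility: collecting and processing readings from millions of endpoints fast enough to run a loop like this in near real time, which is where elastic cloud capacity earns its keep.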

And what about smarter cities? The city of Wuxi in southeastern China developed a "cloud services factory" to provide computing resources to local companies. Software developers can access new resources in minutes, and new businesses can hit the ground running. Wuxi now has the potential to provide services to hundreds of small and medium-sized companies, which represent the future of a city that sees itself as an engine for growth.

Are there economic, cultural or other trends that are driving the adoption of cloud computing?

Today, more than ever, there is a need to drive down the cost of computing while being fully prepared for variable and peak workloads. In addition, if a cloud can handle a multi-tenant environment with 15, 50, or 500 customers, all with dynamic processing on the same infrastructure and the same support (monitoring, backup/restore), costs have to decrease due to economies of scale.

I think that the tough economy has definitely spurred interest in cost-cutting measures and efficiency, but I think the real drivers are the emergence and acceptance of virtualization and Service Oriented Architecture in companies. Companies are becoming more technically astute and see the advantages of subscribing to web services, applications, storage, and services like spam filtering in a cloud. In addition, because many people's workstations are now on their phones, PDAs, and netbooks, it makes more sense to host the operating systems, applications, and data on virtual servers in a cloud.

Another factor driving adoption is the need to stay competitive in today's markets. For most businesses, the ability to deliver more applications and services without adding fixed costs helps improve focus on core business competencies: improving time to market, increasing business flexibility, and shifting from fixed to variable costs. The ability to monitor costs also helps.

What are some of the perceptions or barriers that need to be overcome for cloud computing to gain the widest possible acceptance?

The main perceptions to overcome are the lack of bulletproof reliability, performance, and security and privacy. Performance concerns center on throughput because computing is off-site. There is also concern about whether data in a public cloud will be secure from competitors' eyes.

It's clear that a variety of security technologies, processes, procedures, laws, and trust models are required to secure the cloud. There is no silver bullet for securing the cloud, but who better than IBM, with its full breadth and depth of solutions and services, to enable organizations to take a business-driven, holistic approach to securing it? IBM capabilities empower organizations to dynamically monitor and quantify security risks, to better understand threats and vulnerabilities in terms of business impact, to better respond to security events with security controls that optimize business results, and to better prioritize and balance their security investments.

How will IT change over the next five years or so, because of the influence of cloud computing?

I think IT technologies will more and more be applied to real-world (non-IT) assets as we transform utilities, transport, healthcare, buildings, and cities into smarter versions. This will be made possible in many instances by the power of cloud computing.

An example of a specific opportunity is in the area of storage. As the world becomes smarter, we are collecting more and more data, and storage requirements are skyrocketing. Today we are approaching a trillion connected sensors that are enabling smarter planet plays such as smarter transportation and smarter healthcare. Being able to farm out the management of storage devices to experts and pay for what is actually used is already becoming very compelling.

I think five years from now we will see many of the smarter planet plays being realized and powered by clouds. It will not be cloud illusions we recall but real leverage and value for our industry solutions.

The cars in the moving train were some of the oldest in the transit network...Federal officials had sought to phase out the aging fleet because of safety concerns...saying it lacked the money for new cars...

Hersman told The Associated Press that the NTSB had warned in 2006 that the old fleet should be replaced or retrofitted....

This isn't the first time that Metro's automated system has been called into question...In June 2005, Metro experienced a close call because of signal troubles in a tunnel under the Potomac River... Shortly after that incident, Metro attributed the problem to a defective communications cable...

Some observations... It is not clear at this point whether 1) a red signal for that block of track was raised because a train was already on the track ahead and the following train violated that signal authority, or the red signal failed to go up to indicate a potential block ahead, which would indicate a signal system failure; 2) the signal system worked correctly but the computer system failed to apply the emergency brakes in time (since the train was operating in automatic mode) and the operator pressed the mushroom button rather belatedly; or 3) the train was traveling faster than the authorized speed on this track, leaving insufficient braking time.

As an FYI, WMATA already uses ATC (Automatic Train Control), which to me is a precursor to Positive Train Control (the new mandate). So I am sure the group that is looking at PTC is looking at this incident very closely.

The premise behind PTC is that locomotives (or any smart vehicle) can take decisions and actions independent of the central dispatch center based on 1) the data they get from on-board sensors, wayside sensors, and GPS systems for locational awareness and 2) the policies and rules that have already been established on the on-board device. So a locomotive doesn't always rely (and doesn't need to rely) on commands from the dispatch center to act in cases like this. (The point here is that a "headless system" is already a guiding principle for this architecture.)
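The "headless" principle above can be sketched in a few lines. This is purely illustrative: the rule set, field names, and thresholds are hypothetical, not taken from any actual PTC or ATC specification. The point is only that pre-loaded policies plus locally available data are enough for the on-board unit to act without a dispatch-center command.

```python
def onboard_decision(state):
    """Evaluate pre-established rules against locally available sensor,
    wayside, and GPS data. Returns the action the train takes on its own."""
    # Rule 1: never enter a block whose signal is red.
    if (state["next_signal"] == "red"
            and state["distance_to_signal_m"] < state["braking_distance_m"]):
        return "EMERGENCY_BRAKE"
    # Rule 2: enforce the speed limit for the current track segment.
    if state["speed_kmh"] > state["segment_limit_kmh"]:
        return "SERVICE_BRAKE"
    # Rule 3: losing contact with dispatch is not, by itself, a stop
    # condition; the train keeps operating on its local rules.
    return "PROCEED"

state = {
    "speed_kmh": 80,
    "segment_limit_kmh": 70,
    "next_signal": "green",
    "distance_to_signal_m": 1200,
    "braking_distance_m": 600,
}
print(onboard_decision(state))  # SERVICE_BRAKE: over the segment limit
```

Notice that nothing in the decision function consults a central system; that is what makes the architecture degrade gracefully when communications fail.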

I can see where this peer-to-peer system can be an additional input data point in establishing the seriousness of the situation and the need to act in an instance like this. Our work will allow that control to be distributed to the train, so that, should the situation require it (such as loss of contact with the central system), the train could communicate with other trains and other signals and execute the central policies around train control without having to be in touch with the central control system.

With all the stimulus funding for infrastructure, there should be a focused effort on replacing or retrofitting these systems with current technology which would have helped to prevent this horrific accident. The technology and solutions exist today.

We have been developing an aviation solution that addresses some of the contributing problems in the Air France event. Our solution is called the Advanced Aerospace Solution Environment, or AASE.

The AASE was designed to send real-time flight data from the engines, airframe, and other sensors on the aircraft during in-flight operation to ground operations. When one of the many sensors detects a fault, information about the fault and other related data is sent in real time to the central ground system and recorded. This also kicks off a workflow process to have the problem investigated, and the part replaced if needed, when the aircraft lands at its destination.
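The pattern described above, a ground system ingesting real-time readings and turning faults into maintenance work orders, can be sketched as follows. To be clear, this is a hedged illustration of the general idea, not the actual AASE design: the sensor names, plausibility limits, flight identifiers, and work-order format are all invented.

```python
fault_log = []      # ground-side record of detected faults
work_orders = []    # maintenance workflow queue

# Plausibility limits per sensor (illustrative values only).
SENSOR_LIMITS = {"airspeed_knots": (100, 600), "egt_celsius": (0, 950)}

def ingest_reading(flight, sensor, value):
    """Ground system receives one real-time reading; an out-of-range value
    is logged as a fault and opens a work order at the destination airport."""
    low, high = SENSOR_LIMITS[sensor]
    if not (low <= value <= high):
        fault_log.append({"flight": flight["id"], "sensor": sensor, "value": value})
        work_orders.append({
            "airport": flight["destination"],
            "task": f"inspect {sensor} sensor",
        })

flight = {"id": "FL123", "destination": "JFK"}
ingest_reading(flight, "airspeed_knots", 450)  # in range: no action
ingest_reading(flight, "airspeed_knots", 38)   # implausible: fault + work order
print(fault_log, work_orders)
```

The value of the real system is that the fault record exists on the ground the moment it happens, independent of whether the aircraft's own recorders are ever recovered.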

According to media reports, the issue may have been caused by an airspeed sensor reporting incorrect information, an event the AASE solution could have detected and addressed.

Right now the FAA and other foreign agencies are trying to locate the "black box" flight recorder to help determine the exact cause. Although the AASE solution does not capture the pilots' communications or all the second-by-second sensor data, it could help with information on the critical sensors where faults occurred. This additional information could help crash investigators determine the cause in the event the black box is not found or is damaged. It could also shed light on the maintenance history and how frequently the part in question may have been a problem and gone unreplaced.

Some observations lead me to believe that the airlines can do a better job managing the lifecycle, maintenance, and engineering changes, something our industry solutions address. I blogged about this in the past... Remember American Airlines (and others) having to ground their fleets to rush changes or risk compliance penalties?

Let's start with 2 observations:

1) Air France and Airbus apparently couldn't agree on what the maintenance changes should be

2) Airbus was forced to make changes to the pilots manual

Other data points:

The agency said the A330 had sent out 24 error messages in four minutes, including one indicating a discrepancy in speed data. It said similar problems had happened before.

Air France said it had first noticed in May 2008 that ice in the sensors was causing lost data in planes like the A330, but that it failed to agree with Airbus on steps to take.

According to Air France, Airbus offered to carry out an in-flight test on new sensors this year but the airline decided to go ahead and started changing them anyway from April 27. It did not say whether the crashed plane had the new sensors but its last maintenance hangar visit was on April 16.

Some of the A330's 50 or so other operators defended the plane's safety record at an airlines meeting in Kuala Lumpur on Sunday, saying the crash was an isolated incident.

Airbus has faced problems with the speed sensors dating to at least 2001, forcing changes in equipment as well as the pilot's flight manual, according to online filings.

In 2001, France reported several cases of sudden fluctuation of A330 or A340 airspeed data during severe icing conditions and Airbus was ordered to change the cockpit manual, according to the U.S. Federal Aviation Administration.

It is still early days, and we have to wait for the final analysis, but I believe there is room for improvement given the data we have, and the airlines should take steps immediately to address these problems.