Month: June 2007

I mentioned a lecture on Emerging Technologies in MIT Courseware earlier. Here is a slide titled “Final Lesson”, part of the summation.

This calls for a new kind of training: teaching people to adopt and use new technologies to do what they do better, and to understand how those technologies will affect what they do today. This is another of the basic skills you need to acquire to thrive in the new world.

I think problems inspire creativity. Sometimes the creativity results in art. This is the case with Santhosh, a student at BMA India, one of the most impressive institutions I have come across. They have a great facility, with state-of-the-art computer labs. I was there last week, giving a couple of talks.

Santhosh walked into the class with this drawing. I asked him what the story was.

Like most commuters in Bangalore, Santhosh gets stuck in traffic once in a while. The problem is acute, and he spent some time depicting it visually. I was pretty impressed, so I requested a copy of the drawing and promised to blog about it. So here it is.

Traffic is a big problem in India, especially in the major metros. The cause is not only poor infrastructure but also the mix of different types of vehicles on the road.

This is probably one of the best stories I have heard in a while. We are in Bangalore, where I am giving some talks, and I found this piece of news in a local newspaper.

The school incentive program is part of the mayor’s wider antipoverty initiative, which also includes other cash payments, all raised privately, to influence behavior and reduce poverty. Details of the various incentive programs were announced yesterday by Linda Gibbs, the deputy mayor for health and human services, at a briefing at City Hall. The incentive programs are expected to attract more than 2,500 families in Harlem; Brownsville and East New York in Brooklyn; and the Morris Heights and East Tremont sections of the Bronx, she said.

Cash incentives for adults will include $150 a month for keeping a full-time job and $50 a month for having health insurance. Families will also receive as much as $50 per month per child for high attendance rates in school, as well as $25 for attending parent-teacher conferences.

A program like this may go a long way toward educating poor children in countries where parents would rather have their children work to support the family.

The cool thing about this program is that all the funding is from private sources.

Critics also pointed to the limitations of links that pointed in only one direction and were untyped. The Web’s success has to a large extent overridden these criticisms without really proving them wrong. Ironically, it now seems that many of the early criticisms weren’t exactly incorrect per se, but merely shortsighted.

The W3C’s Semantic Web Education and Outreach group recently agreed to support a community project called Linking Open Data on the Semantic Web. The project’s goal is to make various open data sources available on the Web as RDF and to set RDF links between data items from different data sources. Groups in the field made progress before the community project even began. Examples include the dbpedia.org project and the D2R Server publishing the DBLP bibliography.
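For a sense of what "RDF links between data items" means in practice, here is a minimal sketch in Turtle notation; the second URI is a hypothetical placeholder for a record in some other data source:

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# A typed link asserting that two URIs from different
# data sources identify the same real-world entity.
<http://dbpedia.org/resource/Bangalore>
    owl:sameAs <http://example.org/geo/bangalore> .
```

Unlike a plain HTML hyperlink, the link itself carries a type (owl:sameAs), which is exactly the kind of typed, data-bearing link the early hypertext critics said the Web lacked.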

The Web’s utility does depend on its level of deployment—the network effect—and it’s doing rather well there. But it would be disappointing if, after all this time, the Web was only just catching up technically with its predecessors. Virtually all the hypertext features said to be lacking from the Web have been formalized within various specifications. Conceptually, the key is viewing the link as a unit of data. If this view is overlaid on the current Web, then not only are the shortfalls the critics describe illusory, but there’s still a huge amount of untapped potential in the Web even in its current “simplistic” architecture. We’re entering interesting times.

My first introduction to hypertext was through a special issue of BYTE magazine. I recall reading every article in that issue, excited by the potential of a link. Then I met Doug. I was amazed at how much his Augment did with links.

I am glad to read this article from Danny for it brings back all those linked memories.

I was watching the video of Seven Habits of Highly Effective Text Editing. I am a sucker for anything titled Seven Habits. I use Vim (Vi Improved). Vi was the first editor I started with on Unix, and since DOS versions were available, I kept using it. So when I saw a session on Vim on Google Video, I decided to watch.

Here is Bram Moolenaar’s mantra:

Detect Inefficiency

Find a quicker way

Make it a habit

I think this applies to programming as well and probably many other areas.

This is one area where we can definitely benefit from a standard representation.

The Open Geospatial Consortium has dubbed Google’s Keyhole Markup Language – the markup language used to describe geographic content in Google Earth – a best practice, and is working with Google and other OGC members including ESRI and Autodesk to make sure KML integrates well with such other standards as the Geography Markup Language.

An OGC official said the main advantages of making KML a standard are that it speeds development of Web-based mapping applications, encourages greater interoperability of products and ensures easier movement of data between applications. OGC expects KML 3.0 to be released as a standard early next year.
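As a rough sketch of what KML looks like, here is a minimal placemark document; the coordinates are illustrative, and the namespace follows the KML 2.x convention:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Bangalore</name>
    <description>An illustrative point placemark</description>
    <Point>
      <!-- longitude,latitude,altitude -->
      <coordinates>77.59,12.97,0</coordinates>
    </Point>
  </Placemark>
</kml>
```

Because the format is just XML, any mapping application can parse and exchange documents like this, which is the interoperability argument for standardizing it.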

Commercial software—the kind you sell to other people—is a game of inches.

Every day you make a tiny bit of progress. You make one thing just a smidgen better. You make the alarm clock default to 7:00am instead of 12:00 midnight. A tiny improvement that will barely benefit anyone. One inch.

There are thousands and tens of thousands of these tiny things.

It takes a mindset of constant criticism to find them. You have to reshape your mind until you’re finding fault with everything.

And as you fix more and more of these little details, as you polish and shape and shine and craft the little corners of your product, something magical happens. The inches add up to feet, the feet add up to yards, and the yards add up to miles. And you ship a truly great product.

The saying goes, “a brand is a promise.” On a personal level, I’ve always felt that statement was incomplete. A promise is the lowest common denominator of a brand – it’s what people expect. Think of your favorite brand, whether search engine or sneaker or coffee shop or free software, and you’ll know what I mean – a brand is an expectation. If you experience anything less, you’re disappointed.

But a brand must go beyond a promise. To me, a brand is a cause – a guiding light. For fulfilling expectations, certainly, as well as dealing with the ill-defined and unexpected. It’s what tells your employees how to act when circumstances (and customers) go awry, or well beyond a training course. My first real experience with that was a personal one.

For people in the multi-core, high-performance computing world, this may be old hat. But I stumbled upon this when I was catching up with the LtU blog after a long time. This part of the paper caught my attention:

The point is to identify the kernels that are the core computation and communication for important applications in the upcoming decade, independent of the amount of parallelism. To develop programming systems and architectures that will run applications of the future as efficiently as possible, we must learn the limitations as well as the opportunities. We note, however, that inefficiency on embarrassingly parallel code could be just as plausible a reason for the failure of a future architecture as weakness on embarrassingly sequential code.

One of the thoughts that keep coming back to me is how unprepared we are as an industry to take advantage of parallel computing and leverage multi-core. I am sure that research groups in all major software companies are chipping away at this problem but there is no broad visibility or urgency to tackle it.
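To make that gap concrete, here is a minimal sketch of the simplest data-parallel style, using Python’s standard multiprocessing module (the workload and pool size here are arbitrary):

```python
from multiprocessing import Pool

def square(x):
    # An independent, "embarrassingly parallel" unit of work.
    return x * x

if __name__ == "__main__":
    # Distribute the work across worker processes; each element
    # is computed independently, so multiple cores can be used.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    print(results)
```

Even this simplest pattern puts the burden of identifying and structuring the parallelism on the programmer, which is part of why mainstream development feels unprepared for multi-core.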

I was looking for a programming model for parallel computation. Something you can teach to the next generation of programmers. I have not found much, but this paper was a good starting point for me.

Since real world applications are naturally parallel and hardware is naturally parallel, what we need is a programming model, system software, and a supporting architecture that are naturally parallel. Researchers have the rare opportunity to re-invent these cornerstones of computing, provided they simplify the efficient programming of highly parallel systems.

We believe that much can be learned by examining the success of parallelism at the extremes of the computing spectrum, namely embedded computing and high performance computing. This led us to frame the parallel landscape with seven questions, and to recommend the following: The overarching goal should be to make it easy to write programs that execute efficiently on highly parallel computing systems. Instead of traditional benchmarks, use 13 “Dwarfs” to design and evaluate parallel programming models and architectures. A dwarf is a pattern of computation and communication. Dwarfs are well-defined targets from algorithmic, software, and architecture standpoints.

This group took Phillip Colella’s “Seven Dwarfs” and added six more by examining the following areas:

Embedded computing (EEMBC benchmark)

Desktop/server computing

Machine learning

Games/graphics/vision

Database software

The result was a list of 13 dwarfs. This paper is fascinating reading. If you are interested in tracking this space, there is a wiki-based website.

This space is exciting. There are lots of opportunities in research into parallelism – languages, communication and computing patterns, optimization, tools for monitoring and tuning performance, new algorithms and brand new applications.