Archive for the ‘Trust’ Category

Imagine a personal filter that connects people to web content and to other people based on interests and memories they uniquely have in common – enabling discovery of new knowledge and personal connections. High-quality matches free you from the chore of reading every item, so you don’t have to worry about missing something you care about in your email, on the web, or in a social network.

Background

There are 1.3 billion people on the web and over 100 million active websites. The Internet’s universe of information and people – both published content and content addressed to the user – is growing every day. Published content includes web pages, news sources, RSS feeds, social networking profiles, blog postings, job sites, classified ads, and other user-generated content like reviews. Email (both legitimate and spam), text messages, newspapers, subscriptions, etc. are addressed directly to the user.

The growth of Internet users and competition among publishers is leading to a backlog of hundreds or thousands of unread email, RSS, and web items in user inboxes and readers — forcing users to settle somewhere between the extremes of reading all the items or starting fresh (as in “email bankruptcy”).

Search is not enough. If you don’t have time to read it, a personal filter reads it for you. If search is like a fishing line useful for finding what you want right now, a personal filter is a fishing net to help you capture Internet content tailored to your persistent interests — for which it is either painful or inefficient to repeatedly search and get high quality results on an hourly or daily basis.

Techniques

Personal filtering can help users recover the peace of mind that they won’t miss an important item, whether in the email inbox, on the web, or in a social network — provided matching algorithms deliver a sufficient signal-to-noise ratio (SNR). For example, an alert for “Devabhaktuni Srikrishna” produces relevant results today because it is specific – i.e. SNR is high. Only about 1 in 100 alerts for “natural language processing” or “nuclear terrorism” end up being relevant to me. Signal is low, noise is high, so SNR is low — but it can be improved through knowledge of personal “context” gained via emails, similar articles, etc. Successful, relevant matches are all about context: what uniquely describes a user, and can we find it in other news or users?

The relevance of a match should be easily/instantly verifiable by the user.

Statistical analysis of phrases has proven to be a useful method for extracting meaningful signal from the noise and finding precise matches.

Phrases describing each person are in what users write, read, and specify.

Framework

Let’s define the user corpus to be all content describing users, which includes the user’s own email, blogs, bookmarks, and social profiles. Interests can also be derived from items or links the user decides to upload or submit specifically by email – “like this.” As new information arrives, the most likely and statistically useful interests are automatically detected through natural language processing algorithms (aka text mining), similar to Amazon’s Statistically Improbable Phrases (SIPs). The interests are displayed on a website analogous to friend lists in social networking; the interest list is automatically ranked, and the user vets the list to keep only the most meaningful interests.
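To make the SIP analogy concrete, here is a minimal sketch — my own illustration, not the system described above — of ranking phrases that occur unusually often in a user’s corpus relative to a background corpus. The function name and the KL-style scoring are assumptions:

```python
from collections import Counter
from math import log

def extract_interests(user_docs, background_docs, top_n=10):
    """Rank two-word phrases that appear unusually often in the user's
    corpus relative to a background corpus -- a rough analogue of
    Amazon's Statistically Improbable Phrases."""
    def bigrams(docs):
        counts = Counter()
        for doc in docs:
            words = doc.lower().split()
            counts.update(zip(words, words[1:]))
        return counts

    user, bg = bigrams(user_docs), bigrams(background_docs)
    user_total, bg_total = sum(user.values()), sum(bg.values())
    scores = {}
    for phrase, count in user.items():
        if count < 2:  # ignore one-off phrases
            continue
        p_user = count / user_total
        p_bg = (bg.get(phrase, 0) + 1) / (bg_total + 1)  # smoothed
        scores[phrase] = p_user * log(p_user / p_bg)     # KL-style surprise
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [" ".join(p) for p in ranked[:top_n]]
```

A real system would use longer n-grams, stopword handling, and a much larger background corpus, but the idea is the same: a phrase is an interest candidate when it is far more probable in the user’s text than in everyone else’s.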

Once a user’s interest list is generated, it can be used to filter all information in the entire web corpus via open search engine APIs (such as Yahoo-BOSS and Live Search API 2.0) to create a personal search index. The filter may also be aimed at a subset of the web which we call the user-targeted corpus configured directly by the user – including incoming email, alerts, RSS feeds, a specific news source (like the Wall Street Journal), social networking profiles, or any information source chosen by the user. There are now open-standard APIs to email (like IMAP for Gmail and Yahoo Mail Apps), social networks (like Facebook API and Opensocial for Linkedin, Hi5, MySpace, and other social networking sites), and newspapers like the New York Times.
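As a sketch of the filtering step, the following hypothetical function scores items from a user-targeted corpus (parsed RSS entries, emails, etc.) against the vetted interest list. The dict layout and threshold are illustrative assumptions, not the actual system:

```python
def filter_items(items, interests, threshold=1):
    """Keep only items whose title or body mentions at least `threshold`
    of the user's interest phrases, ranked by match count.
    `items` is a list of dicts with 'title' and 'body' keys
    (a stand-in for parsed RSS entries or emails)."""
    results = []
    for item in items:
        text = (item["title"] + " " + item["body"]).lower()
        hits = [phrase for phrase in interests if phrase.lower() in text]
        if len(hits) >= threshold:
            results.append((len(hits), item, hits))
    # items matching the most interests come first
    results.sort(key=lambda r: r[0], reverse=True)
    return [(item, hits) for _, item, hits in results]
```

Returning the matched phrases alongside each item supports the verifiability goal above: the user can see at a glance why something was surfaced.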

The result: A manageable number of high-quality filtered matches, ranked and displayed for the user, both viewable on the website and in the form of a periodic email (daily update).

Personal Music Filtering

Recommender Systems

Related Posts

…no one has yet to put together an end-to-end Persistent Search offering that enables consumer-friendly, comprehensive, real-time, automatic updates across multiple distribution channels at a viable cost.

Because let’s face it, Personalization + Clustering is the next big step in RSS. If 2005 was about Aggregation, then 2006 is all about Filtering.
Nik wrote up his thoughts today, in a post entitled Memetracking Attempts at Old Issues. While he mentions lack of link data as being an issue, it seems to me the crux of the problem is this:

“generating a personal view of the web for each and every person is computationally expensive and thus does not scale, at all.”

He goes on to say that “this is why you don’t have personalized Google results – we just don’t have the CPU cycles to care about you.”

…if you think it is hard enough to keep up with e-mails and instant messages, keeping up with the Web (even your little slice of it) is much worse…. I need less data, not more data.

Bringing all of this Web messaging and activity together in one place doesn’t really help. It reminds me of a comment ThisNext CEO Gordon Gould made to me earlier this week when he predicted that Web 3.0 will be about reducing the noise. (Some say it will be about the semantic Web, but those two ideas are not mutually exclusive). I hope Gould is right, because what we really need are better filters.

I need to know what is important, and I don’t have time to sift through thousands of Tweets and Friendfeed messages and blog posts and emails and IMs a day to find the five things that I really need to know. People like Mike and Robert can do that, but they are weird, and even they have their limits. So where is the startup that is going to be my information filter? I am aware of a few companies working on this problem, but I have yet to see one that has solved it in a compelling way. Can someone please do this for me? Please? I need help. We all do.

A given song is represented by a vector containing approximately 150 genes. Each gene corresponds to a characteristic of the music, for example, gender of lead vocalist, level of distortion on the electric guitar, type of background vocals, etc. Rock and pop songs have 150 genes, rap songs have 350, and jazz songs have approximately 400. Other genres of music, such as world and classical, have 300-500 genes. The system depends on a sufficient number of genes to render useful results. Each gene is assigned a number between 1 and 5, and fractional values are allowed but are limited to half integers.[1] (The term genome is borrowed from genetics.)

Given the vector of one or more songs, a list of other similar songs is constructed using a distance function.
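A minimal sketch of that distance-based lookup, assuming plain Euclidean distance over equal-length gene vectors (the actual Music Genome distance function is not public):

```python
from math import sqrt

def distance(a, b):
    """Euclidean distance between two equal-length gene vectors
    (each gene scored 1-5, in half-integer steps)."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similar_songs(seed, catalog, k=3):
    """Return the k songs in `catalog` (a dict of name -> gene vector)
    closest to the `seed` vector."""
    ranked = sorted(catalog, key=lambda name: distance(seed, catalog[name]))
    return ranked[:k]
```

In practice one would also weight genes differently (a vocalist’s gender may matter more than a subtle production trait), which turns this into a weighted distance over the same vectors.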

To create a song’s genome, it is analyzed by a musician in a process that takes 20 to 30 minutes per song. Ten percent of songs are analyzed by more than one technician to ensure conformity with the standards, i.e., reliability.

The technology is currently used by Pandora to play music for Internet users based on their preferences. (Because of licensing restrictions, Pandora is available only to users whose location is reported to be in the USA by Pandora’s geolocation software).[2]

I had an idea to make a personalized news feed reader. Basically, I’d register a bunch of feeds with the application, and rate a few stories as either “good” or “bad”. The application would then use my ratings and the article text to generate a statistical model, apply that model to future articles, and only recommend those it predicted I would rate as “good”. It sounded like a plausible idea. I decided to start a pet project.

I soon learned that this idea wasn’t original, and in fact had been attempted by quite a few companies. The first to seriously implement this idea was Findory, later followed by Thoof, Tiinker, Persai, and probably others I’m not aware of. As of this writing, only Persai is still in business. Apparently, personalized news feeds aren’t terribly profitable. Why they’re not a commercial hit is a whole article in itself, so I won’t go into it now. However, before I admitted to myself that this project was doomed to failure, I decided to implement a few components to get a better feel for how the system would work. This is a review of a few interesting things I learned along the way.
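One plausible shape for the statistical model described above is a naive Bayes classifier over article words. This sketch is my own illustration, not Findory’s, Thoof’s, or Persai’s actual approach:

```python
from collections import Counter
from math import log

class RatingModel:
    """Minimal naive Bayes model over article words: train on articles
    rated 'good' or 'bad', then predict the rating of unseen text."""
    def __init__(self):
        self.word_counts = {"good": Counter(), "bad": Counter()}
        self.label_counts = Counter()

    def train(self, text, label):
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        vocab = len(set(self.word_counts["good"]) | set(self.word_counts["bad"]))
        best, best_score = None, float("-inf")
        for label in ("good", "bad"):
            counts = self.word_counts[label]
            n = sum(counts.values())
            # log prior + smoothed log likelihood of each word
            score = log(self.label_counts[label] / total)
            for w in words:
                score += log((counts[w] + 1) / (n + vocab))  # Laplace smoothing
            if score > best_score:
                best, best_score = label, score
        return best
```

Only articles the model predicts as “good” would be recommended; each new rating becomes another training example, so the model tracks the user’s drifting interests.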

The site’s most powerful feature is its robust search function, which allows users to search for others using many criteria. After creating a search, users can choose to have the site persistently monitor for any matches in the future.

One of the paramount abilities of a good analyst is to spot trends early and realize their potential impact on a company or industry. What analysts are usually searching for is any hint of weakness or strength in competitive advantage. Sometimes the smallest trends start in the local newspapers. Google News makes locating those topics and stories much easier.

If you pair Google News with the enhanced filtering ability of Yahoo! Pipes and your favorite feed reader, you can create some worthwhile tools that sharpen your trend-seeking abilities.

Here is an example I’ve been working on as part of a wider range of investment ideas on Oil.

There are choices on the page to make this search into an RSS feed. Clicking a link on the page will create a feed URL in either RSS 2.0 or Atom. You can then take that feed and do further refining in Yahoo! Pipes. I like to create a broad search from Google News and then apply a layer of filters in Pipes for key terms that I think are important. Once I have configured Pipes to my liking, it becomes a feed for my RSS reader. I also created a pipe that looks at the opinion and editorial feeds from certain newspapers. Those in the analyst community will recognize this technique as akin to using Google Alerts. Using RSS is the better mousetrap, and it doesn’t clog your mailbox.
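The broad-search-then-filter pattern can be approximated in a few lines. This hypothetical stand-in treats feed entries as title strings and applies include/exclude terms the way a Pipes filter module would:

```python
import re

def pipe_filter(entries, include_terms, exclude_terms=()):
    """Yahoo-Pipes-style second-stage filter over a broad search feed:
    keep entries matching any include term and no exclude term.
    `entries` is a list of title strings (stand-ins for feed items)."""
    inc = [re.compile(t, re.IGNORECASE) for t in include_terms]
    exc = [re.compile(t, re.IGNORECASE) for t in exclude_terms]
    return [e for e in entries
            if any(p.search(e) for p in inc)
            and not any(p.search(e) for p in exc)]
```

A real pipeline would fetch and parse the Google News feed first (e.g. with a feed parser), then chain several of these filters before handing the result to a reader.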

For years, I’ve used Google Alerts as a way of keeping track of myself online. If my name is mentioned in a blog or if this column appears on the Web, such as on the site of a newspaper that syndicates it, a Google Alert sends me an email about it. Google Alerts can work for you to find a variety of things, such as telling you if a video of a favorite band popped up online or that a blogger posted something about last night’s episode of “Mad Men.”

In about a month, Google will begin delivering these alerts to users via feeds, as well as emails. Google certainly isn’t alone in the alerts arena, as Yahoo, Microsoft and AOL are also players. This week I tried two small companies that recently joined the mission to help users find Web content using alerts.

I tried Alerts.com and Yotify.com, and found worthwhile features in both. While Google Alerts does a good job of finding search terms in news, blogs and videos, Alerts.com and Yotify use forms that are a cinch to fill out and let you pinpoint your searches.

I do believe, however, that we are going to move toward a more personalized and satisfying user experience within the next decade. After all, we went from Web 1.0 in 1995 to Web 2.0 in 2005. Only three years into Web 2.0, perhaps it is natural that we stay here and fine-tune for another five to seven years, before the real breakthrough innovations can come about and usher in Web 3.0.

E-MAIL has become the bane of some people’s professional lives. Michael Arrington, the founder of TechCrunch, a blog covering new Internet companies, last month stared balefully at his inbox, with 2,433 unread e-mail messages, not counting 721 messages awaiting his attention in Facebook. Mr. Arrington might be tempted to purge his inbox and start afresh – the phrase “e-mail bankruptcy” has been with us since at least 2002. But he declares e-mail bankruptcy regularly, to no avail. New messages swiftly replace those that are deleted unread.
When Mr. Arrington wrote a post about the persistent problem of e-mail overload and the opportunity for an entrepreneur to devise a solution, almost 200 comments were posted within two days. Some start-up companies were mentioned favorably, like ClearContext (sorts Outlook inbox messages by imputed importance), Xobni (offers a full communications history within Outlook for every sender, as well as very fast searching), Boxbe (restricts incoming e-mail if the sender is not known), and RapidReader (displays e-mail messages, a single word at a time, for accelerated reading speeds that can reach up to 950 words a minute). But none of these services really eliminates the problem of e-mail overload because none helps us prepare replies. And a recurring theme in many comments was that Mr. Arrington was blind to the simplest solution: a secretary.


E-mail overload is the leading cause of preventable productivity loss in organizations today. Basex Research recently estimated that businesses lose $650 billion annually in productivity due to unnecessary e-mail interruptions. And the average number of corporate e-mails sent and received per person per day is expected to exceed 228 by 2010.

“Information Overload: We Have Met the Enemy and He is Us,” authored by Basex analysts Jonathan B. Spira and David M. Goldes and released Dec. 19, claims that interruptions from phone calls, e-mails and instant messages eat up 28 percent of a knowledge worker’s work day, resulting in 28 billion hours of lost productivity a year. The $588 billion figure assumes a salary of $21 per hour for knowledge workers.
The addition of new collaboration layers forces the technologies into untenable competitive positions, with phone calls, e-mails, instant messaging and blog-reading all vying for workers’ time.
For example, a user who has started relying on instant messaging to communicate may not comb through his or her e-mail with the same diligence. Or, a workgroup may add a wiki to communicate with coworkers, adding another layer of collaboration and therefore another interruption source that takes users away from their primary tasks.
Beyond the interruptions and competitive pressure, the different modes of collaboration have created more locations through which people can store data. This makes it harder for users to find information, prompting users to “reinvent the wheel because information cannot be found,” Basex said.
Basex’s conclusion is that the more information we have, the more we generate, making it harder to manage.

If a technological or biological weapon were devised that could render tens of thousands of Defense Department knowledge workers incapable of focusing their attention on cognitive tasks for more than 10 minutes at a time, joint military doctrine would clearly define the weapon as a threat to national security.
Indeed, according to the principles of network attack under Joint Publication 3-13, “Information Operations (IO),” anything that degrades or denies information or the way information is processed and acted upon constitutes an IO threat. That same publication cautions military leaders to be ever-vigilant in protecting against evolving technologically based threats. Yet throughout the Defense Department and the federal government, the inefficient and undisciplined use of technology by the very people technology was supposed to benefit is degrading the quality of decision-making and hobbling the cognitive dimension of the information environment.
We all receive too much e-mail. According to the Radicati Research Group, roughly 541 million knowledge workers worldwide rely on e-mail to conduct business, with corporate users sending and receiving an average of 133 messages per day – and rising. While no open-source studies address how the Defense Department’s e-mail volume compares to corporate users’, my own anecdotal experience and that of legions of colleagues suggests a striking similarity. Without fail, they report struggling every day to keep up with an e-mail inbox bloated with either poorly organized slivers of useful data points that must be sifted like needles from stacks of non-value-adding informational hay or messages that are completely unrelated to any mission-furthering purpose.
E-mail is a poor tool for communicating complex ideas. Text-only communication, or “lean media,” as it is referred to by researchers who study the comparatively new field of computer mediated communication, lacks the nonverbal cues, such as facial expression, body language, vocal tone and tempo, that inform richer means of communication. Moreover, aside from its qualitative shortcomings and viral-like reproductive capacity, a growing body of research suggests e-mail’s interruptive nature is perhaps the most pressing threat to decision-making in the cognitive dimension.

Personalization isn’t only coming, it’s here. Sign in to your Google account and you can activate it. Prepare to be underwhelmed. But even if it were as Carrasco describes, privacy concerns would stop personalized search from being adopted until the benefits were undeniable. It would take a radical shift.

When Google came along, it provided something that had never been seen before: good search results. Unlike all the other search engines, Google’s top few slots had what we were looking for. And it provided them fast.

It was a much easier time to make big changes. Someone has to make us realize that Google’s results are as antiquated as Yahoo and Excite were in the late 90s. A change in interface might be the most likely innovation.

Sphere, which was acquired by AOL News, displays articles “related” to the content of the page currently viewed by the user, and now powers over 100,000 sites, including many major news outlets like the Wall Street Journal, Time, Reuters, etc. It is also the back-end service (see content widget) used to generate “possibly related posts” on WordPress.

“We founded Sphere with a mission to make contextually relevant connections between all forms of content (mainstream media articles, archived articles, videos, blogs, photos, ads) that enable the reader to go deep on topics of interest,” wrote Conrad.

At the time of its acquisition, Sphere reached a large number of webpages:

Sphere’s third-party network includes more than 50,000 content publishers and blogs and is live on an average of more than 2 billion article pages across the web every month.*

The way Sphere works is a combination of many signals. Let’s use an example — say, of what else, broadband. They look for blogs that write about broadband (including those with broadband in the title of the blog) to create a short list. If I am linking to someone who is also a broadband blogger, and vice versa, Sphere puts a lot of value on that relationship. The fact is most of us broadband bloggers tend to debate with each other. Think blog rank, instead of Google’s PageRank. The company has also taken a few steps to outsmart the spammers, and tends to push what seems like a spam blog way down the page — not censoring, but bringing up relevant content first. They have a pronoun checker: too many I’s could mean a personal blog, with less focused information. That has an impact on how the results show up on the page.

It pays attention to the ecology of relationships between blogs, for example, and it gives a higher weighted value to links that have more authority. This will ensure, for example, that when a Searchblog author goes off topic and rants about, say, Jet Blue, that author’s rant will probably not rank as high for “Jet Blue” as would a reputable blogger who regularly writes about travel, even if that Searchblog author has a lot of high-PageRank links into his site. Sphere also looks at metadata about a blog to inform its ranking – how often does the author post, how long are the posts, how many links on average does a post get? Sphere surfaces this information in its UI. I have to say, it was something to see that each Searchblog post gets an average of 21 links to it. Cool!

Last week I was on vacation without Internet access. Now that REALLY slowed down my infomaniac impulses! I had to settle for flipping through the stack of magazines that I had brought with me, and reading the ebooks that I had downloaded previously and stored on my hard drive. (Thank God I had thought ahead!)

While on vacation, I had time to think about how much time I waste on aimless browsing and unfocused research. So I created a list of changes that I’m going to implement this week. That includes:

removing any Google Alerts that have not proved themselves useful over the past 6 months

deleting any RSS feeds that have not added to my knowledge or imagination

hitting “unsubscribe” to ezines that I don’t really read

using a timer to keep my Internet rovings to 15 minutes (unfortunately, it has a 7 minute snooze button)

According to presidential candidate Obama, 50 tons of loosely guarded highly enriched uranium (HEU) remain in 40 countries around the world, and he wants to negotiate agreements to eliminate them within four years. A laudable goal — but how will we go about doing this? Will all these other countries willingly give up their HEU stocks? Will they decommission or convert their HEU-based operating reactors?

In this 1-page proposal, I explore how a single, fixed price for HEU might solve the problem once and for all.

“If we can spend the early decades of the 21st century finding approaches that meet the needs of the poor in ways that generate profits for business, we will have found a sustainable way to reduce poverty in the world,” Mr. Gates plans to say….

To a degree, Mr. Gates’s speech is an answer to critics of rich-country efforts to help the poor. One perennial critic is Mr. Easterly, the New York University professor, whose 2006 book, “The White Man’s Burden,” found little evidence of benefit from the $2.3 trillion given in foreign aid over the past five decades.

Mr. Gates said he hated the book. His feelings surfaced in January 2007 during a Davos panel discussion with Mr. Easterly, Liberian President Ellen Johnson Sirleaf and then-World Bank chief Paul Wolfowitz. To a packed room of Davos attendees, Mr. Easterly noted that all the aid given to Africa over the years has failed to stimulate economic growth on the continent. Mr. Gates, his voice rising, snapped back that there are measures of success other than economic growth — such as rising literacy rates or lives saved through smallpox vaccines. “I don’t promise that when a kid lives it will cause a GNP increase,” he quipped. “I think life has value.”

Brushing off Mr. Gates’s comments, Mr. Easterly responds, “The vested interests in aid are so powerful they resist change and they ignore criticism. It is so good to try to help the poor but there is this feeling that [philanthropists] should be immune from criticism.”

Easterly is a former research economist at the World Bank, now at NYU. In his book he looks at the successes and failures of international aid interventions (financial and military) by “The West” and makes the case that they have done more harm than good during the past 50+ years.

Most of Easterly’s book makes sense to me and I agree with Easterly that philanthropic/aid agencies are not “above” criticism – their hyped up expectations do not necessarily make things better and sometimes they make things worse by standing in the way of more realistic, lasting solutions… but,

I agree with Gates on one thing: you can get into trouble measuring national economic development using aggregate GDP (growth) instead of measuring the purchasing power of the bottom of the pyramid (the lowest half or quarter) of earners in the economy. Easterly cites India as a success story of development, showing a chart of exponential GDP growth over 20-40 years using the Indian IT industry as an example. Despite this progress, the failure is that 50 years after Indian independence, close to half of all Indians (400-500 million people) still live on less than $1 or $2 per day.

Easterly says new (niche) market creation is limited by social and legal barriers to trust and property rights and therefore must take place indigenously – there is not much we can do about it living in “The West.” I think we have not yet explored the potential of the Internet to overcome these constraints and help diversify agricultural economies (long tail). See my essay “Opening Niche Markets in Rural India using the Internet”

Style. Though his analysis is very compelling and data-driven, with graphs, stories of people, case studies of developing nations, and world history, to me the title seems a bit polarizing or stuck in the past, and the tone of the writing is funny but also feels a bit sarcastic. Perhaps it is discouraging to visionaries and optimists who want to break from the past. In his book he takes aim at Bono and Jeffrey Sachs’s “The End of Poverty.”

I tried to capture the main ideas of the book. I’m sure I missed something, but I think it’s mostly here.

— Top-down “planners” at large institutions like the World Bank mobilize resources on the basis of utopian agendas and large-scale “big pushes” that attract donor governments and private institutions (in the US, UK, and The West). These visions are never achieved because they lack feedback from the people (Africa, Asia, and “The Rest”) whom they are intended to benefit, i.e. the poor. On the other hand, he notes that the World Bank produces very high quality economic research.

— Unlike market-driven firms or (legitimately) elected officials, the planners are accountable to donors, not the poor. Planners’ jobs do not depend on serving the poor but rather on indulging donors’ unrealistic expectations, which may never materialize. The planners’ efforts do more harm than good (a large part of what the book is about). The failures of these big pushes become self-fulfilling as donors redouble their efforts, bureaucracy becomes bloated, and they begin to measure progress based on the volume of aid disbursed, not the impact on the poor. The incentives of planners and poor people are not sufficiently aligned; this is called the principal-agent problem.

— Bottom-up “searchers” (NGOs, entrepreneurs, profit-seeking companies) who are on the ground in developing countries can get direct feedback from the poor people they serve and make a real impact on their lives. They set realistic, achievable goals, unlike the planners. Too little money is going to support the searchers. He offers several case studies.

— On microfinance and microcredit,

Microcredit is not the panacea for poverty reduction that some have made it out to be after Yunus’s discovery. Some disillusionment with microcredit has already come in response to these blown-up expectations. Microcredit didn’t solve everything; it just solved one particular problem under one particular set of circumstances: the poor’s lack of access to credit except at usurious rates from moneylenders.

— Markets are a spontaneous outgrowth of social trust (for transactions) and property rights (for investment), and can’t be planned by aid agencies, foreign governments, or “out of the blue” after an invasion or removal of a dictator.

— Foreign aid has been most effective and made a large-scale impact on people’s lives in areas like vaccination, health care delivery, and programs to keep kids and girls in school, compared to areas where results can’t be directly measured. Dollar for dollar, the recent momentum to offer AIDS treatment ($1,000 per person), like Bush’s $15 billion commitment of US taxpayer funds for Africa (30 million infected), is many times less cost-effective than preventing the spread of AIDS through condoms (600 million not infected) or even preventing other life-threatening diseases like malaria, diarrhea, and infant mortality. The spread of AIDS could have been avoided had prevention been a bigger priority, since experts have been predicting this epidemic for decades. In AIDS, saving a life gets more “emotional” attention from the public than preventing AIDS transmission, which could save many more lives.

— There have been some success stories, but economic growth in developing countries has not been correlated with aid or intervention by the West. Colonialism and imperialism have resulted in long-term economic stagnation, which he offers as a case study against other neo-imperialistic plans to take over weak states. His claim is that countries develop much faster and better when they are left on their own.

— National financial health has less direct impact on earnings of the poorest people, except indirectly via inflation and government subsidies to the poor.

The IMF’s approach is simple. A poor country runs out of money when its central bank runs out of dollars. The central bank needs an adequate supply of dollars for two reasons. First, so that residents of the poor country who want to buy foreign goods can change their domestic money (let’s call it pesos) into dollars. Second, so those poor-country residents, firms, or governments who owe money to foreigners can change their pesos into dollars with which to make debt repayments to their foreign creditors. What makes the central bank run out of dollars? The central bank not only holds the nation’s official supply of dollars (foreign exchange reserves), it also makes loans to the government [aside from foreign borrowing with bonds] and supplies the domestic currency for the nation’s economy. The government spends the currency [it borrows], and the pesos pass into the hands of people throughout the economy. But are people willing to hold the currency? The printing of more currency [excessive government borrowing from the Central Bank] drives down the value of the currency if people spend it on the existing amount of goods – too much currency chasing too few goods… so they take the pesos back and exchange them for dollars. The effect of printing more currency that people don’t want is to run down the central bank’s dollar holdings. Too few dollars for the outstanding stock of pesos is kind of like the Titanic with too few lifeboats. The country then calls on the IMF. So the standard IMF prescription is to force contraction of central bank credit to the government, which requires a reduction in the government’s budget deficit [government spending]… forces the government to do unpopular things [like cut subsidies] – disturbance of domestic politics.

— Bad governments (corruption and violent dictators) have been responsible for much of the slow growth in these countries, which are in turn caused by either a colonial past or by historical poverty itself. Foreign aid tends to prop these governments up, and in some cases private organizations working around these governments can lead to much better results.

— Loans are not necessary to balance a national budget, and the IMF’s prescriptions for foreign exchange lending to developing countries and reducing government spending can be way off. This is due to severe accounting irregularities in the books of these countries, uncertainty of how or when markets react to falling currency prices, and how they react to information in the economy (people’s behavior). Lending based on shaky foundations can lead to the self-reinforcing “debt trap” through repeated refinancing of poor countries and propping up of bad governments. The IMF does better in emerging markets, but he says it may be better off leaving the poorest countries alone.

Corning, which went public in 1945 and has a market capitalization of about $36 billion, has survived — and often thrived — in recent decades by following a playbook that Wall Street and corporate America deem outmoded. While companies like Xerox Corp. scaled back long-term research, Corning stuck with the old formula, preferring to develop novel technologies rather than buy them from start-ups.

An investment 25 years ago has turned Corning into the world’s largest maker of liquid-crystal-display glass used in flat-panel TVs and computers. But another wager, which made it the biggest producer of optical fiber during the 1990s, almost sank the company when the tech boom turned into a bust.

Corning Inc. has survived for 157 years by betting big on new technologies, from ruby-colored railroad signals to fiber-optic cable to flat-panel TVs. And now the glass and ceramics manufacturer is making its biggest research bet ever.

Under pressure to find its next hit, the company has spent half a billion dollars — its biggest wager yet — that tougher regulations in the U.S., Europe and Japan will boost demand for its emissions filters for diesel cars and trucks.

In Erwin, a few miles from the company’s headquarters in Corning, the glassmaker is spending $300 million to expand its research labs. There, some 1,700 scientists work on hundreds of speculative projects, from next-generation lasers to optical sensors that could speed the discovery of drugs.

Corning’s roots go back to 1851, when Amory Houghton, a 38-year-old merchant, bought a stake in a small glass company, Cate & Phillips. For most of Corning’s history, a Houghton was either chairman or chief executive. Even today, Corning, population 12,000, is very much a company town. The original Houghton family mansion, still used for company meetings, overlooks the quaint downtown, which is punctuated by a white tower from one of Corning’s original glass factories. Most senior managers have spent their entire careers at Corning.

“Culturally, they’re not afraid to invest and lose money for many years,” says UBS analyst Nikos Theodosopoulos. “That style is not American any more.”

Corning also goes against the grain in manufacturing. While it has joined the pack in moving most of its production overseas, it eschews outsourcing and continues to own and operate the 50 factories that churn out thousands of its different products.

Corning argues that retaining control of research and manufacturing is both a competitive advantage and a form of risk management. Its strategy is to keep an array of products in the pipeline and, once a market develops, to build factories to quickly produce in volumes that keep rivals from gaining traction.

But because Corning often depends heavily on a single product line for most of its profit — 92% of last year’s $2.2 billion profit came from its flat-panel-display business — it is vulnerable to downturns. Even small movements in consumer demand for or pricing of its LCD-based products can cause gyrations in its stock price. During the dot-com meltdown when the market for fiber-optic cable crashed, Corning was brought to the brink of bankruptcy and by 2003 was forced to lay off half of its workers. Today it has 25,000 employees.

“…by one international measure, Finnish teenagers are among the smartest in the world. They earned some of the top scores by 15-year-old students who were tested in 57 countries. American teens finished among the world’s C students even as U.S. educators piled on more homework, standards and rules. Finnish youth, like their U.S. counterparts, also waste hours online. They dye their hair, love sarcasm and listen to rap and heavy metal. But by ninth grade they’re way ahead in math, science and reading — on track to keeping Finns among the world’s most productive workers…

The academic prowess of Finland’s students has lured educators from more than 50 countries in recent years to learn the country’s secret, including an official from the U.S. Department of Education. What they find is simple but not easy: well-trained teachers and responsible children. Early on, kids do a lot without adults hovering. And teachers create lessons to fit their students.

In addition to the article, also see the video linked from the WSJ website. The international comparison/test was for 15 year olds in math, science, and reading skills. The video explains that Finnish students don’t start school until age 7 allowing them to play, be free, and develop emotionally for a longer time than their American counterparts who start first grade at age 5.

Yesterday I went to the talk by Premal Shah about www.kiva.org, which he calls the “Ebay for microfinance.” It was at Xerox PARC. Thanks to Chari for giving me a heads up about this. They get lots of press; also see a recent NY Times article about them.

Kiva is a nonprofit website and clearing house that enables Internet users (lenders) to give 0% (interest-free) loans to specific individual or small-group “entrepreneurs” (borrowers) in developing countries in Africa, South America, Eastern Europe, and Asia. They work with 85 field partners from 40 countries, called microfinance institutions (MFIs), who post/vet entrepreneur profiles, which are in turn selected by Internet users for personal loans. One of the biggest advantages of Kiva is end-to-end transparency — each lender can “see” which borrower they are lending to, track their progress through journal updates, and see when the loan is being repaid. See a blog post by Guy Kawasaki that explains their fee-per-transaction business model — a $2.50 voluntary fee that lenders pay when checking out their “shopping cart.”

Yunus was first, 30 years ago, and today there are 10,000 MFIs worldwide. He estimates there are 500M people needing loans like this, and only 100M have been reached through traditional microfinance to date. Access to capital is still a bottleneck, he says. Note: Kiva is prevented from operating in India due to their bank regulations.

Kiva Statistics. Kiva is 3 years old and had loaned $20M as of Q1 2008. They are observing parabolic quarter-on-quarter growth in loans and expect to have loaned $250M to $1B in five years, surpassing the microfinance initiatives of major banks like Citibank ($100M).
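A quick back-of-the-envelope check on those projections (my framing, not Kiva’s: I assume five years means 20 quarters and a constant compounding rate, which the “parabolic” growth pattern may not match exactly):

```python
# Rough check of the growth implied by the projections above: what constant
# quarter-on-quarter growth rate takes cumulative lending from $20M to a
# target over 20 quarters (five years)? Assumptions are mine, not Kiva's.

def implied_quarterly_growth(start_millions, target_millions, quarters=20):
    """Constant per-quarter growth rate needed to go from start to target."""
    return (target_millions / start_millions) ** (1 / quarters) - 1

print(f"{implied_quarterly_growth(20, 250):.1%}")   # to reach $250M
print(f"{implied_quarterly_growth(20, 1000):.1%}")  # to reach $1B
```

Reaching $250M requires roughly 13-14% quarter-on-quarter growth sustained for five years; reaching $1B requires roughly 21-22%.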

The Kiva model:

Internet user (a lender, social investor)

–> Kiva (online marketplace)

–> Local Field Partner (Microfinance Institution)

–> Developing World Entrepreneur (the borrower)

Each Kiva lender has given 2.2 loans on average — $25 max per loan, limited by Kiva. Of the 20,000-30,000 people who visit the site daily, 3% end up giving loans. Each borrower is usually funded by 15-20 lenders on average — typically in the range of $1,000 total loan size. Lenders usually sign up within a day of a loan request being posted for an entrepreneur. In some war-torn or crisis areas like Iraq or Afghanistan, the lenders sign up in just a few hours.

The lender’s (social investor’s) rationale is that Kiva is transparent (you know where the money goes), sustainable (if repaid, money can be lent to someone else), affordable ($25 to change someone’s life), and unique (“I love microfinance, I want to participate”). For the microfinance institution, Kiva offers low-interest US-dollar capital, no liability, flexible repayment terms, and financial assistance plus incentives for transparency. 100% of loan funds go directly to the borrowers.

Kiva checks out (verifies) the MFIs, and the MFIs in turn check out the borrowers. The MFI charges a 20% interest rate to cover distribution costs, and it bears currency risk as well when converting from US dollars to local currency and back during the repayment period. In the future, Premal hopes they can establish credit histories for individuals and bypass this intermediate layer completely. Kiva uses random sampling to audit the MFIs and check for fraud.
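The currency risk the MFI bears can be illustrated with a small sketch. The exchange rates and loan amount below are hypothetical, not taken from the talk:

```python
# Hypothetical illustration of the MFI's currency risk (numbers invented):
# the MFI converts Kiva's dollars to local currency at disbursement, collects
# repayments in local currency, and must convert back to dollars at the
# (possibly worse) rate prevailing at repayment time.

def mfi_fx_shortfall_usd(loan_usd, rate_at_disbursement, rate_at_repayment):
    """Dollars the MFI loses (positive) or gains (negative) on the principal
    if the local currency moved against the dollar during the loan."""
    local_collected = loan_usd * rate_at_disbursement  # principal in local units
    usd_recovered = local_collected / rate_at_repayment
    return loan_usd - usd_recovered

# $1,000 loan disbursed at 50 local units per dollar; the currency then
# depreciates to 60 per dollar by repayment time:
print(round(mfi_fx_shortfall_usd(1000, 50, 60), 2))
```

In this example a 20% depreciation costs the MFI roughly $167 on a $1,000 loan, which is part of what the 20% interest rate has to cover.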

Unlike traditional microfinance, where borrowers are organized into groups who are accountable to each other, social accountability is created via the Internet. Borrowers’ profiles are visible online, including to locals who watch each other at Internet kiosks/cafes.

Capital. Internet users are willing to bear greater risk than banks probably due to the “personal connection.” People don’t want (need) to lend with interest, whereas banks have to when they are working with MFIs.

Operating Principles. “Unreliable credit is OK, but unreliable data is not OK.” Each lender gets a “portfolio” of people they lend to, much like a stock portfolio on Etrade. Loans in this model create a “persistent tie” between people across the world through journal updates. The accountability is simple: if you are getting repaid, something is working; if you don’t get repaid, something is not working. MFIs report repayment rates of 99.7%, but Premal believes that’s skewed to report better than actual results because the MFIs don’t want to discourage Kiva. He believes the actual repayment rate is over 90%. There is a “Risk & Due Diligence Center” on the website.
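The gap between the reported and believed repayment rates is easy to put in dollar terms. A sketch, with a made-up portfolio size and the simplifying assumption that each loan either repays in full or not at all:

```python
# Sketch comparing what the two repayment-rate figures quoted above would
# mean for a lender's portfolio. The portfolio size is a made-up example,
# and treating each loan as all-or-nothing is a simplification.

def expected_principal_returned(num_loans, loan_size_usd, repayment_rate):
    """Expected dollars returned if each loan independently repays in full
    with probability equal to the repayment rate."""
    return num_loans * loan_size_usd * repayment_rate

num_loans, loan_size = 40, 25  # forty $25 loans: $1,000 at risk
print(expected_principal_returned(num_loans, loan_size, 0.997))  # reported rate
print(expected_principal_returned(num_loans, loan_size, 0.90))   # conservative rate
```

At the reported 99.7% rate the expected loss on $1,000 is about $3; at a 90% rate it is about $100 — a real difference, but still one most $25-at-a-time lenders can absorb.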

Diversification. I asked what types of different activities and market opportunities are being funded, especially outside agriculture. He said he doesn’t know for sure, but agricultural opportunities are the dominant activity being funded. In addition, there appears to be a slight lender bias. For example, the most popular kind of loan that gets funded by lenders is an African female farmer, and the least popular is an Eastern European male taxi driver. He questions whether or not that is economically rational, but says that’s what it is right now. In 3Q 2008 they plan to open up APIs so researchers can download and analyze the loan data from their website to gain further insights.

Organization. Kiva.org is 25 people and operates on very low overhead thanks to cooperation and donations from several Silicon Valley companies. For example, PayPal gives free payment processing, and Google offers free AdWords traffic.

Along with a group of MIT alums, one week ago (1/22/2008) I was fortunate to visit the automotive manufacturing plant in Fremont, CA that turns out the Toyota Corolla, Pontiac Vibe, and Toyota Tacoma pickup truck. The NUMMI auto plant is a joint venture of GM and Toyota. It is Toyota’s most efficient plant in North America, taking 19 hours of human effort per vehicle produced, and seventh overall (GM and Honda have more efficient plants) — see the article “Most efficient assembly plants” in Automotive News. In 2007 NUMMI produced 407,881 vehicles — see “There’s a new No. 1 plant: Georgetown.” I have blogged about NUMMI’s high-trust workplace, and here are some notes that another MIT alum put together in 2003.

The first thing I noticed at the entrance to the plant was a rug on the ground titled “Safety Absolutes.”

What struck me was the message of social accountability and interdependence being conveyed in both the rug and the mission statement.

The plant is 5.5 million square feet (118 football fields or 122 Costcos), and workplace for 5,000 “team members” (they didn’t say employees). There are also 300 temporary workers who come in to help for seasonal variations in production.

The emphasis on relationship with team members and “community” comes out at every turn in the plant. The plant has had no layoffs in its 23 year history of operation. Wages start at $20/hr and go up to $35/hr in three years. The plant has 160 “team rooms” with refrigerators, lunchrooms, and lockers. Phrases I heard included “quality, pride, teamwork, job security, benefits, pay, family, successful year, looking out for my family, winning team, all about the family.”

There are five stages (divisions) to auto manufacturing at NUMMI.

Stamping steel into body sheets — 1 million lbs of steel / day

Body / welding

Paint

Plastics

Assembly — the assembly line is 1.5 miles long, producing 650 trucks/day and 900 cars/day. Three hours per truck, with one produced every 85 seconds.
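Those throughput figures can be sanity-checked against the two 7.5-hour shifts the plant runs (a 15-hour production day). My arithmetic, not the tour guide’s:

```python
# Sanity check of the assembly line's cycle time, using the 15-hour
# production day mentioned in these notes (two 7.5-hour shifts).

def cycle_time_seconds(vehicles_per_day, hours_per_day=15):
    """Average seconds between finished vehicles over the production day."""
    return hours_per_day * 3600 / vehicles_per_day

print(round(cycle_time_seconds(650)))  # trucks
print(round(cycle_time_seconds(900)))  # cars
```

650 trucks over a 15-hour day works out to one about every 83 seconds, consistent with the "every 85 seconds" figure; 900 cars/day is one every 60 seconds.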

Quality control involves random test drives and audits at the end of the production line. The quality philosophy really starts with the team members, who are trusted with the authority to push a button that will raise an alert to stop the assembly line if they find a problem — an innovation from Japanese lean manufacturing. There was a time when auto plants did not allow their employees to do this. Once they raise the alert, a red light goes on and they have 81 seconds to clear the alert before the line actually stops. Sometimes it gets cleared up within that time, and other times the line has to stop to fix the problem. After reading about it before coming to the plant, I was really curious, and I actually saw it stop a few times. The line statistics are prominently displayed for team members to view on a scoreboard. They were reporting 2% downtime, and the target is to remain less than 4%.

The NUMMI team members work in teams of 4-6 people, and they rotate their jobs throughout the day whenever they want to — this eliminates most of the repetitiveness and boredom usually associated with manufacturing. They usually spend 1 year in a division (like plastics) before moving on to other types of jobs in the plant, so employees learn about all aspects of manufacturing/production. Team members are encouraged to find ways to improve the process and implement these ideas. Using a “frame rotator,” truck chassis are flipped upside down to aid team member ergonomics during assembly of the drivetrain. We saw several robots by Kawasaki, and automated guided vehicles transporting auto parts.

If an auto plant can do this, what would it look like if we incorporated this philosophy into software development? Software programmers and test engineers would have the authority to raise alerts and hold up software releases, instead of a manager having the final say in triage of bug reviews. People would rotate between software and test. What if the space shuttle launch could be delayed by any engineer on the team instead of being determined by launch managers? To make this work, all engineers have to have sufficient system-level knowledge and be “trusted” with the authority to make these decisions.
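As a thought experiment, an andon-style release gate in software might look like the sketch below. This is entirely hypothetical (not an existing tool): any engineer can raise an alert, and the release "line" stops until every alert is cleared:

```python
# Thought-experiment sketch (hypothetical, not a real tool): an andon-style
# release gate mirroring NUMMI's stop-the-line button, where any engineer
# can halt a release regardless of rank.

class ReleaseGate:
    def __init__(self):
        self.alerts = {}  # alert id -> engineer who raised it

    def raise_alert(self, alert_id, engineer):
        """Any engineer on the team can flag a problem."""
        self.alerts[alert_id] = engineer

    def clear_alert(self, alert_id):
        """Clear an alert once the problem is resolved."""
        self.alerts.pop(alert_id, None)

    def can_release(self):
        """The release 'line' stops while any alert is open."""
        return not self.alerts

gate = ReleaseGate()
gate.raise_alert("flaky-login-test", "test engineer")
print(gate.can_release())  # an open alert blocks the release
gate.clear_alert("flaky-login-test")
print(gate.can_release())  # all clear: release proceeds
```

The design choice mirrors the plant: authority to stop is distributed to everyone doing the work, while the decision to resume requires the problem to actually be resolved, not a manager’s override.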

There is a hierarchy,

skilled worker — several of whom are organized into quality circles

team leader of 4-5 workers (tends to be nurturing)

group leader of several teams (tends to be more disciplined)

Only the teams are evaluated for performance, not individuals. People can get fired, but they can’t get laid off. The plant operates in two 7.5-hour shifts for a total of 15 hours per day, five days a week, so people have weekends off. According to the tour guide, job rotation was the big thing that drew employees to the plant, not work or guaranteed employment.

The big difference that comes out here is the relationship with team members and that trust turns into better results.

On October 8, 2006 in San Carlos, CA at the Hiller Aviation Museum, I had the good fortune of listening to Burt Rutan speak about breakthrough innovation, aviation, spaceflight, and aviation safety—totally inspiring. Some of it is very big picture, but here are a few of the highlights,

Technical progress and the ability to take big risks has been what sets humans apart from animals

Children make the decision to be innovators between the ages of 3 and 14, usually due to some events that occur during that period.

Most of the aviation pioneers that people recall (Von Braun, etc.) grew up during the time when the airplane was invented, in the early 1900s. Most aviation since then has used the same basic principles they discovered.

The next major wave of innovation occurred during and after WWII, when he was growing up.

The third wave was when Sputnik made Americans feel they had lost to the Russians, which kick-started the space race of the 1960s.

Since then, we have been using essentially the same space technology for the last 30 years.

His Space Ship One is now housed in the Smithsonian right next to the other major planes of the last century.

Commercial & military airplanes have stagnated in their altitude and speed because the technologies have not been pushed by the organizations that develop them. Space Ship One pushes the envelope by orders of magnitude, representing the next wave of innovation in aviation and space travel. Space Ship Two will be for commercial travel—Virgin airlines may be accepting orders for the first flights.

Safety and stability have been the barriers to entry in commercial space flight—that’s what he’s out to change. One of his first jobs out of college was to understand why the F4 had so many failed flights and engineer a stability control system.

Below is a photo of him telling me what he thinks about the Columbia accident report: “if you read it carefully, what they are saying is not to take risks. NASA as an organization will never take risks.” I also asked him what he thinks the difference is between his small 130-person company and NASA. He replied that he never puts his engineers and factory personnel in the position of defending safety, i.e. never to be in a defensive position, nor does he allow an aviation regulator to do that to them.

I read this to mean it places full responsibility on the people doing the work to ensure safety.

Safety has to be so obvious to the people doing the work that there is never a need to be defensive—they understand exactly why their aircraft is safe.

My interpretation is that this nurtures a culture which outperforms regulated safety. He claims he has built some 40 research aircraft with an excellent safety record.

The article “How Technology Almost Lost the War: In Iraq, the Critical Networks Are Social – Not Electronic” is very long-winded but informative in the end. The wars in Iraq and Afghanistan illustrate that using network-centric warfare, the US military has demonstrated its mastery at identifying and destroying targets anywhere on earth and invading nations by conventional means with both small numbers of troops and minimal troop losses compared to earlier wars. As the battlefield shifts from invasion to nation building, the wars in Iraq and Afghanistan are forcing the military to learn a completely modern version of fighting Internet-enabled insurgencies amidst civilian populations (COIN/HTT) – a capability it lacked going into Iraq, and one of the reasons GHW Bush apparently stopped short of removing Saddam in 1991. The next war of preemption, whenever it happens, may turn out a lot different if the Pentagon plans ahead.

Blaming the protracted war in Iraq on insufficient troop numbers or technology is a bit simplistic. A recent documentary, “No End In Sight,” was filmed pre-surge, so it does not incorporate the progress/changes resulting from Petraeus’ new strategy. However, it points out that post-invasion planning for Europe during WWII began two years before the invasion, while in the case of Iraq the DoD planning apparatus for post-invasion was almost nonexistent even though the DoD was placed in charge – excluding the State Department. This Wired article doesn’t quite address the strategic planning failures at the Pentagon pre-invasion that might have contributed to some of the Iraqi security problems we see today. The piece “Who Lost Iraq?” in Foreign Affairs tries to expose that more clearly.

In failures of postwar planning, the root issue is local institutions and (absence of) social capital needed to build them. There is a book (2006) called “Nation Building beyond Iraq and Afghanistan”. I read the introductory chapter by Fukuyama “Nation Building and the Failure of Institutional Memory” (pages 1-18) which explains how politics between the defense/state departments got in the way of post-war planning, and it tends to corroborate the picture painted by interviews in the documentary “No End In Sight.”

“the frequency and intensity of US and international nation-building have increased since the end of the Cold War…there has been roughly one new nation building intervention every two years since the end of the Cold War… What is remarkable is how little institutional learning there has been over time; the same lessons about pitfalls and limitations of nation-building seemingly have to be relearned with each new involvement. This became painfully evident after the American occupation and reconstruction of Iraq after April 2003.”

“Nation-building encompasses two different types of activities, reconstruction and development. Although the distinction between the two is often blurred, it was always present to nation-builders and earlier generations dealing with post-conflict situations. The official title of the World Bank is, after all, the International Bank of Reconstruction and Development, and most of its activities fell under the first heading. Reconstruction refers to the restoration of war-torn or damaged societies to their preconflict situation. Development, however, refers to the creation of new institutions and the promotion of sustained economic growth, events that transform the society open-endedly into something that it has not been previously… Development, however, is much more problematic, both conceptually and as a matter of pragmatic policy. The development phase by contrast requires the eventual weaning of local actors and institutions from dependence on outside aid… it is seldom the case that local institutions are actually strong enough to do all the things they are intended to do.”

After reading Fukuyama’s book “Trust” (1996) where he explains the role of social capital in the world’s economies (in addition to intellectual capital, financial capital, or natural resources), the lack of social capital in these nation-building efforts explains why completely new local institutions in Iraq and Afghanistan take decades to gel and become productive.

This week we got to see the founder of Grameen Bank, Muhammad Yunus, speak in person at the Fairmont Hotel in downtown SF (Commonwealth Club). He pioneered a new form of banking that can be called “trust-based” or “community-based” lending, in contrast to the worldwide banking system based on “collateral” and “credit histories.” For this he won the Nobel Peace Prize in 2006. See this FAQ http://www.grameen-info.org/bank/GBGlance.htm

In 1976, he was an economics professor at Dhaka University, where he found the theories he was teaching had little practical applicability to the poverty he was seeing outside the university. So he decided to do what he could. He lent small amounts of money (~$500) to poor people, and they became very happy. Then he asked, why doesn’t the bank do this?

For months he struggled to get banks to lend to the poor, but found them to be uncooperative. Since poor people didn’t have credit histories or collateral, the banks were unwilling to lend money. His insight was to turn those assumptions on their head by lending to people with no money and no history of taking loans. He sought out poor women who were afraid of taking money, and told his employees that those are the people they should loan to.

No lawyers. By challenging all the “assumptions,” he came up with something completely new. He said that the current banking system has lots of money and is set up to loan large amounts of money to people who already have money. That architecture doesn’t scale down to the vast majority of people who need it. Using the analogy of ships, he said the modern banking system is like a large supertanker. For poor people you need a system more like a dinghy, and if you scaled down a supertanker architecture to that size it would sink.

Grameen Bank does not require any collateral against its micro-loans. Since the bank does not wish to take any borrower to the court of law in case of non-repayment, it does not require the borrowers to sign any legal instrument.

Although each borrower must belong to a five-member group, the group is not required to give any guarantee for a loan to its member. Repayment responsibility solely rests on the individual borrower, while the group and the centre oversee that everyone behaves in a responsible way and none gets into repayment problem. There is no form of joint liability, i.e. group members are not responsible to pay on behalf of a defaulting member.

Social business. Yunus described the concept of “social business,” intended simply to help people, which can exist alongside profit-maximizing businesses and often works in synergy with them. Related, see the January 24 front-page article on Gates’ speech at Davos, “Bill Gates Issues Call For Kinder Capitalism”

“If we can spend the early decades of the 21st century finding approaches that meet the needs of the poor in ways that generate profits for business, we will have found a sustainable way to reduce poverty in the world,” Mr. Gates plans to say.

Everyone is an entrepreneur. Yunus explained that all people are entrepreneurial by nature, and it is the system that either brings it out in them or not. Grameen Bank has been able to convert tens of thousands of beggars into door-to-door salespeople who earn a living – since they visit those houses anyway, why not take along something to sell and make money? It turns out beggars have unique knowledge of when particular people are home, and of household demographics, which is useful in selling.

Begging is the last resort for survival for a poor person, unless he/she turns into crime or other forms of illegal activities. Among the beggars there are disabled, blind, and retarded people, as well as old people with ill health. Grameen Bank has taken up a special programme, called Struggling Members Programme, to reach out to the beggars. About 98,500 beggars have already joined the programme. Total amount disbursed stands at Tk. 102.27 million. Of that amount of Tk. 69.74 million has already been paid off

There are many other programs (scholarships, cell phones, loan insurance, etc) described on the website. Some facts,

Total amount of loan disbursed by Grameen Bank is US $ 6.55 billion

Loan recovery rate is 98.35 per cent

Total number of borrowers is 7.34 million, 97 per cent of them are women

Grameen Bank finances 100 per cent of its outstanding loan from its deposits. Over 58 per cent of its deposits come from bank’s own borrowers. Deposits amount to 139 per cent of the outstanding loans

Ever since Grameen Bank came into being, it has made profit every year except in 1983, 1991, and 1992.

Grameen Bank has 2,468 branches. It works in 80,257 villages. Total staff is 24,703.

I expect the Commonwealth Club audio transcript will be available in a few weeks. Here are some funny short clips from when he was interviewed on The Daily Show (2006) right after receiving his Nobel Prize, where he uses the phrase “trust-based lending.” He was also interviewed a few weeks ago by Stephen Colbert (2008). Speaking to Colbert, Yunus remarks that the current home loan default crisis in the US was caused not by the people who took the money but rather by the banks, who didn’t understand how to lend money.