Think you’re immune to Google Search? A new effort by the company promises to unearth your embarrassing elementary school photos, achievements and other data, then incorporate them into the Google brain.

The Retro-Active Quantification Industry, which I believe will grow to a multi-billion-dollar valuation by 2015, made a big leap forward this week with the release of Google’s News Archive Search.
Many years in the works, the new service/feature allows users to do exactly what it says – search a huge body of archived small-town newspapers that have been scanned into Google’s system, converted from visual to text data using the company’s perfected optical character recognition system (note: they’re also working on a similar but more robust system that will mine text data – t-shirts, street signs, house numbers, etc. – from photographs), and indexed using Google’s world-famous search.

Best of all, Google allows you to view the original scanned images and “browse through them exactly as they were printed—photographs, headlines, articles, advertisements and all”, much like microfiche in a library basement (remember those?).

Google Inc., the uncontested leader in Internet services, announced it has shipped its 5 millionth “free” computer, only 14 months after starting up the “Free Computer Program”. The Google Product Manager, Pierre Lindsely, stated he is overwhelmed by the success of his project and that they are trying very hard to keep up with demand.

People now have to wait more than three weeks to get their “G-Tops”, as they have become known, instead of the three days when the program started. Pierre Lindsely: “People will wait for anything if it’s free, so I am not worried that this will impact the enthusiasm for this product. We are attracting some new suppliers and we will see the waiting time decrease gradually.” The free Google computers come with a free broadband connection that connects only to Google Wi-Fi hubs (a.k.a. G-Spots). (cont.)

Wouldn’t it be nice to have cheap, high-speed wifi blanketing the entire United States? You’d be able to access the internet from anywhere, which would allow you to stream entertainment during long road trips, keep up to date on mass transit arrivals and departures, fall back on Google Maps when you become lost, or just not be tethered to an ethernet cord when you want to watch your kids play in the backyard while doing a bit of homework.

Sound appealing? Google thinks so too. And they’ve proposed yet another solution to make this high-speed internet dream a reality.

Here’s the plan: the February 2009 conversion of all U.S. televisions from analog to digital will free up an extraordinary amount of white space (basically, gaps of bandwidth in the previously saturated television spectrum) that could be used to project wireless internet signals throughout every home in America relatively risk-free.

Google’s ex parte filing with the FCC states that “[t]he unique qualities of the TV white space – unused spectrum, large amounts of bandwidth, and excellent propagation characteristics – offer a once-in-a-lifetime opportunity to provide ubiquitous wireless broadband access to all Americans. In particular, this spectrum can provide robust infrastructure to serve the needs of underdeployed rural areas, as well as first responders and others in the public safety community. Moreover, use of this spectrum will enable much-needed competition to the incumbent broadband service providers.”

Sounds like a win-win for everyone. So what’s the problem? TV broadcasters, wireless phone manufacturers, and even the NFL are worried that utilizing this white space will interfere with their programming, services or wireless devices. Google argues that this would not be a problem due to low-cost “spectrum sensing”, which would prevent signals from being crossed.
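
To make “spectrum sensing” concrete: the textbook approach is energy detection, where a device samples a candidate channel and transmits only if the measured power sits below a noise threshold. Google’s filing doesn’t spell out its method, so the sketch below is a generic illustration with invented numbers.

```js
// Toy energy-detection spectrum sensing: a device checks whether a TV
// channel appears vacant before transmitting. All values are illustrative.
function channelIsVacant(samples, thresholdDb) {
  // Mean power of the sampled signal, converted to decibels.
  var power = samples.reduce(function (sum, s) {
    return sum + s * s;
  }, 0) / samples.length;
  var powerDb = 10 * Math.log10(power);
  return powerDb < thresholdDb;
}

// Simulated noise-only samples: low energy, so the channel reads as free.
var noise = [];
for (var i = 0; i < 1024; i++) {
  noise.push(0.01 * (Math.random() - 0.5));
}
console.log(channelIsVacant(noise, -40)); // true -> safe to transmit
```

A real device would also have to reliably detect weak signals like wireless microphones, which is exactly the detection reliability the broadcasters and the NFL are disputing.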

Nothing gets humans up in arms like a new technology. Will it cure our ills and save us from destruction? Or end the world in one cataclysmic, Earth-shattering moment? Clearly, no invention has accomplished either, but try telling that to the fanatical, hysterical or just plain irrational among us. Now, with technology advancing at an ever-quickening pace, rational thinking is in short supply. Here then, to prove the point, are eight of the biggest freak-out moments in technology history:

Writing Will Make Us Forget – Socrates

The written word, and the ability to understand it, is considered one of the most important developments ever achieved by mankind and a defining step for any civilization. But not everyone was always a fan. Even that hero of western philosophy, Socrates, once argued that writing would make people lazy and forgetful!

“The fact is that this invention will produce forgetfulness in the souls of those who have learned it,” said Socrates. “They will not need to exercise their memories, being able to rely on what is written, calling things to mind no longer from within themselves by their own unaided powers, but under the stimulus of external marks that are alien to themselves. So it’s not a recipe for memory, but for reminding, that you have discovered.”

Sound familiar? It is the same argument that some people nowadays are directing at both Google and the World Wide Web.

Given that pretty much every major advancement subsequent to the birth of writing is built on writing itself (collectively we have advanced much faster through the use of writing), it certainly did anything but make people lazy. Forgetful? Perhaps, on an individual level. But I sure am glad Plato broke out his quill to write down Socrates’ teachings; otherwise I couldn’t “remember” to complain about him now.

Get Out of the Way, Here Comes the Train!

Reportedly, when the Lumière brothers showed their films for the first time at the Grand Café in Paris in 1895, audience members ran out of the room in a panic. Why? To avoid being hit by the image of a train pulling into a station!

Google has been the Golden City of Silicon Valley, and indeed of the whole World Wide Web, for the past several years. The savvy start-up that grew from a Menlo Park garage into one of the biggest companies in the world in less than a decade is not only a business wunderkind, but a cultural icon whose name has become a verb for finding information on the Internet. Yet as Google’s rise to fame attests, the Internet is a fast and fickle place where a good new idea can change everything.

In a recent interview with Mad Money host Jim Cramer, Google CEO Eric Schmidt said that Google can avoid the flat-line in growth that eventually plagued its high-tech giant predecessors IBM and Microsoft. Google will accomplish this, Schmidt says, through increasingly targeted advertising, breaking into new businesses and keeping to the mantra of not being “evil.”

Is this a realistic forecast? Can Google’s very size and success be a detriment to its innovation? Can it really conquer new markets? Though the company’s stock has consistently outperformed expectations, growing an impressive 26% last quarter, there are some tell-tale signs that Google’s empire is not immune to the forces of time or economics.

Innovation by Acquisition: By Schmidt’s own admission, Google will need to innovate at a high rate to remain competitive. The company has released several products in the last few years, including Gmail, Google Earth, Google Docs (which I am using to type this article), Google Calendar, Knol, and most recently its web browser Chrome. But much, if not the bulk, of the company’s innovation has been generated through acquisitions. While many of the purchases have been a big boon for Google (DoubleClick, for example, is estimated to have brought in $90 million for Google last year), several of the innovative companies acquired have mysteriously entered the ever-widening Google black hole. Jaiku, a Twitter-like micro-blogging company, was purchased in October of 2007 and is still closed to new users. GrandCentral, a site that allows you to integrate all your phone numbers and voicemail boxes into one account accessible from the web, had a markedly similar fate. Even Blogger, once the king of blogs, has withered from lack of development and upgrading since being acquired. It now seems doomed to forever live in the shadow of its successors WordPress and Movable Type.

A quick look at this comprehensive list of Google’s acquisitions reveals many great ideas that are either dead in the black hole, still being developed by Google, or in use but simply not being promoted. It’s hard to say which, but considering how old some of these acquisitions are and how quickly the Internet world moves, even in the best-case scenario of “development” Google is proving it simply hasn’t been able to integrate and develop its acquisitions quickly enough.

Google just announced that advertisers with slow landing pages (the page their ad is linked to) will be penalized and see a drop in their Quality Score. This will force hundreds of thousands, if not millions, of advertisers to decrease their load time by either trimming graphics or moving to a better server.
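
For advertisers wondering where they stand, a rough first check is easy to script. A minimal sketch (the URL is a placeholder, and this measures only raw fetch time, not rendering or Google’s actual metric, which isn’t public):

```js
// Time how long a landing page takes to download in full.
// Runs in any modern JavaScript runtime with fetch built in (Node 18+, browsers).
async function timeLandingPage(url) {
  var start = Date.now();
  var response = await fetch(url);
  await response.arrayBuffer(); // pull the whole body, not just the headers
  return Date.now() - start; // elapsed milliseconds
}

timeLandingPage("https://example.com/landing").then(function (ms) {
  console.log("Landing page loaded in " + ms + " ms");
});
```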

According to the company, the primary reason for the penalty is that “users have the best experience when they don’t have to wait a long time for landing pages to load.”

This makes a great deal of sense, and I, for one, look forward to a smoother web browsing experience.

But what I find really interesting about the new landing page penalty is that it supports the ongoing evolutionary trend of compressing time-to-information. As the big web companies continue to battle for our attention, they will do whatever it takes to better the experience for the user and keep those eyeballs. The result is an increasingly faster web and quicker or “greased” human-info network.

This web acceleration, in turn, accelerates everything else. As the time it takes to reach information decreases, humans are freed up to do more and better research, to allocate the saved time to other processes, or to just kick back and relax for the extra few minutes per day.

Taking another step back, we can see that rewarding distributed acceleration will be key to sustaining the exponential info and tech curves that run through our business and social systems. So if we’re truly in or approaching the knee of the curve, we should expect to see a steady increase in incentivized acceleration. After all, the mathematicians running Google have got to be accel-aware, no?

Is Google hip to accelerating change? (a reader poll)

- Absolutely. That’s central to their business model.
- Yes, but only as a consequence of doing business.
- No. They understand Metcalfe’s Law and the value of information networks, but not broader acceleration.

It was big news when Microsoft demonstrated their WorldWide Telescope software at the TED conference last month. The software, set to go live this spring, allows users to explore the wonders of space via a map of digital images taken by the greatest telescopes around the world.

Then, without much fanfare, Google went ahead and launched Google Sky yesterday, an application that, uh, allows users to explore the wonders of space. Previously only available through Google Earth, it’s now a freestanding application that runs in a Web browser. It’s got some pretty cool features, like viewing various regions of our universe at different wavelengths (infrared, microwave, ultraviolet, x-ray), viewing with constellation overlays, and listening to podcasts about celestial bodies and upcoming astronomical events.

Erick Schonfeld, asking Is Keyword Search About to Hit its Breaking Point?, talks about Spivack’s view of the future of the web. According to Spivack, it lies in ever-more-refined search technologies such as semantic search, natural language search, and artificial intelligence. A quote:

Keyword search engines return haystacks, but what we really are looking for are the needles. The problem with keyword search such as Google’s approach is that only highly cited pages make it into the top results. You get a huge pile of results, but the page you want—the “needle” you are looking for—may not be highly cited by other pages and so it does not appear on the first page. This is because keyword search engines don’t understand your question, they just find pages that match the words in your question.

Spivack wants to “do for data what the Web did for documents” and develop a standard, uniform system for semantic metadata. It’s the classic “dumb software, smart data” idea. Tagging works to a degree, but it’s neither uniform nor standard — the same tag can mean two different things for two different people, and two different tags can mean the same thing.
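
A toy contrast makes the point (everything here is invented for illustration, not Spivack’s actual system): keyword matching can’t tell two senses of a word apart, while even a crude subject-predicate-object triple can.

```js
// Two pages that both contain the word "jaguar", in different senses.
var pages = [
  { url: "a.example", text: "jaguar top speed in the wild" },
  { url: "b.example", text: "jaguar top speed on the highway" },
];

// Keyword search: both pages match, because the engine only sees words.
var keywordHits = pages.filter(function (p) {
  return p.text.indexOf("jaguar") !== -1;
});
console.log(keywordHits.length); // 2 -- the haystack

// Semantic metadata: uniform triples pin down what each page is about.
var triples = [
  { subject: "a.example", predicate: "isAbout", object: "Animal/Jaguar" },
  { subject: "b.example", predicate: "isAbout", object: "Car/Jaguar" },
];
var semanticHits = triples.filter(function (t) {
  return t.object === "Animal/Jaguar";
});
console.log(semanticHits[0].subject); // "a.example" -- the needle
```

The hard part, of course, is getting everyone to agree on the same vocabulary of predicates and objects, which is exactly the uniformity problem tagging never solved.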

That said, the premise underpinning Spivack’s whole argument is that search will remain the correct interface when faced with a world of exponentially increasing information. His version of the future says, “Keyword search will become increasingly inefficient and the solution is to develop semantically aware systems that search based on meaning, rather than content.” (cont.)

Google Earth is the ultimate palette for myriad developers whose products require geo-spatial context, but its utility and reach have been capped by the fact that it’s a stand-alone application that exists outside the standard browsing experience. As of today that’s no longer the case. With the release of the new Earth Browser Plug-in (Google’s little Hulk), the future hub and entry point for many of the company’s offerings has escaped its cage and is now free to roam the halls of the World Wide Web and look for new friends… millions of them.

In the immediate short term, this allows those who have installed the plug-in to embed frames of Google Earth directly into their web pages and to manipulate and mash objects and places.

“Driven by an extensive JavaScript API, you can control the camera; create lines, markers, and polygons; import 3D models from the web and overlay them anywhere on the planet,” writes Paul Rademacher, Technical Lead of the Earth Browser Plug-in project. “In fact, you can even overlay your content over different planets, stars, and galaxies by toggling Sky mode, letting you build 3D Google Sky mashups. You can also enable 3D buildings with a single line of JavaScript, attach JavaScript callbacks to mouse events, fetch KML data from the web, and more.” (cont.)
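
To give a flavor of that API, here is roughly what instantiating the plug-in and dropping a placemark looks like. This is a minimal sketch based on the calls Rademacher describes: it assumes a page that loads Google’s JS API loader (http://www.google.com/jsapi) and contains a <div id="map3d">, and the coordinates are arbitrary placeholders.

```js
// Load the Earth API module, then create a plug-in instance in the page.
google.load("earth", "1");

function init() {
  google.earth.createInstance("map3d", initCallback, failureCallback);
}

function initCallback(ge) {
  ge.getWindow().setVisibility(true);

  // The promised "single line of JavaScript" for 3D buildings.
  ge.getLayerRoot().enableLayerById(ge.LAYER_BUILDINGS, true);

  // Create a placemark and pin it to a (placeholder) lat/long.
  var placemark = ge.createPlacemark("");
  placemark.setName("Hello, Earth");
  var point = ge.createPoint("");
  point.setLatitude(37.422);
  point.setLongitude(-122.084);
  placemark.setGeometry(point);
  ge.getFeatures().appendChild(placemark);
}

function failureCallback(errorCode) {
  // Fires if, among other cases, the user hasn't installed the plug-in.
  console.log("Earth plug-in failed to load: " + errorCode);
}

google.setOnLoadCallback(init);
```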

For all those out there wetting their pants over Google’s new Linux-based phone operating system: Android will be unveiled tomorrow to much hoopla. And while delay after delay has done some damage to the egos of salivating Googlephiles, anticipation is still high.

The reason?

For one thing, nobody likes a monopoly. The iPhone has become the standard when it comes to hip smartphones, which for some reeks of domination. The hope is that the Android mobile operating system will do some damage to the iPhone juggernaut. Although many expect an assortment of bugs since it’s the first release (as well as choppy graphics), it’s still an attractive alternative for users who don’t want an iPhone or are sick of Windows Mobile (or anything Microsoft).

Secondly, the operating system is based on Linux. Many PC users have been switching to Linux due to problems with the Vista OS. Linux has its own culture about it, one even more dedicated than Apple’s user base. Linux users are fiercely proud of it, it’s free, and anyone can alter it. The idea of a Linux-based mobile phone operating system will be irresistible to any Linux fan.

Google Australia has just announced the release of a revolutionary new product called MATE™ (Machine Automated Temporal Extrapolation) that extrapolates web data up to one full day in advance of reality.

According to a statement released by the suddenly resurgent company, “Using MATE’s™ machine learning and artificial intelligence techniques developed in Google’s Sydney offices, [it is possible to] construct elements of the future.”

So how exactly does it work?

Google spiders crawl publicly available web information and our index of historic, cached web content. Using a mashup of numerous factors such as recurrence plots, fuzzy measure analysis, online betting odds and the weather forecast from the iGoogle weather gadget, we can create a sophisticated model of what the internet will look like 24 hours from now.

The implications are frankly astounding. The ability to predict information patterns and the statistical likelihood of certain content is certain to disrupt established patterns of causality underlying markets, social dynamics and even physical and chemical reactions.

In Google’s own words:

We can use this technique to predict almost anything on the web – tomorrow’s share price movements, sports results or news events. Plus, using language regression analysis, Google can even predict the actual wording of blogs and newspaper columns, 24 hours before they’re written!
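
For the curious, the simplest conceivable version of “temporal extrapolation” is just projecting a trend one step ahead. A toy sketch with made-up numbers, and emphatically not whatever is (or isn’t) actually running in Sydney:

```js
// Project "tomorrow's" value by extending the average day-over-day change.
function extrapolateNext(series) {
  var n = series.length;
  var avgDelta = (series[n - 1] - series[0]) / (n - 1);
  return series[n - 1] + avgDelta;
}

var dailyPageviews = [120, 132, 141, 155]; // invented daily counts
console.log(extrapolateNext(dailyPageviews)); // ~166.7, tomorrow's prediction
```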

The internet community is abuzz with the latest Google gossip. No, not the prediction that they’ll make their Gmail storage space unlimited for their upcoming anniversary. And no, not Cuil, the latest search engine, designed by former Google employees, which professes to kill Google (so far, all they’ve managed to do is crash their servers over and over). The real news is the release of Google Knol, a social media site that could siphon large amounts of traffic away from information powerhouse Wikipedia.

Knol is, much like Wikipedia, a place on the Internet to share information for free in article form. The key difference is that whereas Wikipedia has articles written and edited by anyone who visits the site, Knol has articles written by industry professionals. In the words of Cedric Dupont on the Google Blog, “Knols are authoritative articles about specific topics, written by people who know about those subjects.” In other words, an article about hearts is written by a cardiologist, not the mass public. Although still in beta testing, Knol has already published hundreds of articles from astronomers, doctors, chefs, professors, and even linguists. Google is even trying to coin the word “knol” into Internet vocabulary, defining it as a “unit of knowledge”.

Another interesting feature of Knol is that authors are allowed, and indeed encouraged, to claim their writing as their own legal property. This also means authors can choose to earn revenue from their content by placing Google ads on their articles’ landing pages. Google writes, “If an author chooses to include ads, Google will provide the author with a revenue share from the proceeds of those ad placements.” An interesting incentive for writers, but so far most of the articles have no ads. Possibly in keeping with the freedom of Wikipedia, most authors don’t want their work tainted by gross auto-generated ads.