Since it’s Thanksgiving Day and in my culture that means watching The Wizard of Oz, I thought I’d reference the wizard (the little man behind the curtain) while making a note of Google’s latest “innovation” — the breadcrumb URL replacement. Aaron over at SEOBook (a great source of free SEO tools by the way) notes that Google has strayed from its focus on relevance, and replaced user-friendly URLs with not-so-meaningful site breadcrumbs.

I like that Matt Cutts responded by saying he’d pass along the word… suggesting that this innovation was not a quality factor, and perhaps had not received much internal debate at Google with respect to the relevance argument. I think that makes sense, because this sort of Google innovation is anti-competitive, and part of a long-term internal strategy. I told Aaron it was just another step toward eliminating the URL. I have long believed Google wants to eliminate domains and URLs, for lots of reasons.

But right now, on Thanksgiving, I want to remind everyone of the Great and Powerful Wizard who was actually a little man behind a curtain.

The breadcrumbs are the wizardry… what Google wants everyone to watch. Something new! Another Google innovation! Look, helpful breadcrumbs! Naturally it is the professional SEO who remarks first that this innovation does not enhance the SERP, because properly optimized sites already have helpful URLs. In the example Aaron provided, Google has replaced a very helpful, meaningful and rich URL with a much less relevant breadcrumb. But Google wants everyone to watch how those breadcrumbs help average web sites, because Google doesn’t want you to look behind the curtain.
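
For what it’s worth, here’s a toy sketch of that point (the example URL and the output format are mine, not Google’s): a well-crafted URL already carries the same hierarchy a breadcrumb trail displays, which is exactly why swapping one for the other adds nothing.

```typescript
// Toy illustration: derive a breadcrumb-style trail from a URL path.
// If the URL is well structured, the "breadcrumb" is already in it.
function breadcrumbFromUrl(url: string): string {
  const segments = new URL(url).pathname.split("/").filter(Boolean);
  return segments.map((s) => s.replace(/-/g, " ")).join(" > ");
}

console.log(breadcrumbFromUrl("https://example.com/mens-shoes/running/trail"));
// -> "mens shoes > running > trail"
```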

What’s behind the curtain? Google hates those who make a market around Google. They hate the companies that make tools that mine Google’s data. They hate the optimizers who scrape Google and they hate the rank checking tools that charge money to report on Google status and ranking. They hate domains that garner direct navigation traffic, because users can find shoes at shoes.com without asking Google where to buy shoes (and revealing that they are a consumer, tracked by numerous Google-owned marketing cookies, now poised to execute a commercial transaction).

This URL removal is an anti-competitive practice that seeks to hinder the efforts of companies that re-sell Google’s data, whether they be SEO research services or re-purposers (scrapers). It is the unique URL (and unique domain name) that enables everyone else to make money on the web. As long as every consumer has to go through Google to find a web page, Google has a chance to take a piece of the profits.

The urgency, however, comes from the competition. As long as Google spends its efforts creating relevant collections of URLs and publishing them to the web for free, others will mine that resource and re-sell it into niche marketplaces for profit. And that harms Google in the long run (or so the thinking goes… you do need to think it through, though).

Think about the businesses that have stored Google result sets for years, and now offer research services. The businesses that offer SEO services, helping web sites rank higher than Google naturally ranks them. The reporting services… even the “let us manage your Local Business Center account” agencies are parasitic to Google’s business model of one website, one business owner, one Google customer.

This year Google renovated its SERPs to use 302 redirects through a Google redirector, instantly breaking many third-party tools and disrupting all sorts of user-centric research tools. AJAX result sets also debuted, redefining the Google SERP as dynamic in a whole new way. And now, in this test, the URL is replaced in many of the results, swapped for breadcrumbs. It could have been anything, not just breadcrumbs. Anything but a direct URL, or a URL which could be parsed out.
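
To see why wrapped URLs break so many parsers, consider this minimal sketch. The /url?q= pattern reflects the commonly observed Google redirector format of the time; treat the parameter name as an assumption on my part, not a documented interface.

```typescript
// Before the change, a tool could read the destination straight from
// the result's href. After it, the href points at Google's redirector,
// and the real URL has to be dug out of a query parameter. The "q"
// parameter name is an assumption based on the commonly seen
// /url?q=... pattern, not a documented spec.
function extractDestination(href: string): string | null {
  const parsed = new URL(href, "https://www.google.com");
  if (parsed.pathname === "/url") {
    return parsed.searchParams.get("q"); // wrapped: unwrap it
  }
  return href; // direct link: nothing to do
}

console.log(extractDestination("/url?q=https://example.com/widgets"));
// -> "https://example.com/widgets"
```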

Personally I like to see big successful companies that exert monopoly-like power over public markets start putting their energy into defensive tactics. It’s energy not put into real innovation. It takes a wee bit of pressure off the upstarts that hope to some day challenge the monopoly. As my lacrosse coach used to scream at me every day: whenever you’re relaxing (not working out), your competitor is getting bigger.

Today Google announced Google “Closure”, a set of tools for working efficiently with javascript. With Google Closure, Google has theoretically “closed the loop” on a number of javascript problems. Not problems developers have with javascript, but problems Google has with javascript. The problems haven’t been solved, mind you, because we don’t all use Google Closure for everything yet, but conceptually, if we did, Google would have a much easier time as web overlord.

I think it’s important to change perspective for a moment, from the Google PR and developer-world perspective to the competitive SEO perspective. I’m not saying any of this is fact; I’m merely implying intent the same way Googlers often imply intent when looking at webmaster activity. I’ve seen Google employees look at a web site that appeared to be clean, and mark it untrustworthy simply because the webmaster appeared to be associated with other sites that were not as clean. That’s the world Google forces us to live in, so Google should live in the same world. If Google expects us to be slimy, it’s a pretty safe bet that Google behaves that way as well.

So what’s this Closure stuff? Google released Closure as four related tools for working with javascript. First, a js compiler, which “compiles web apps down into compact, high-performance JavaScript code”. There is a Firefox tool, which helps you see into compiled code for debugging, since without that no developer would compile anything that wasn’t considered 100% finished. There is a “well-tested, modular, and cross-browser JavaScript library” called the Closure Library, and finally Closure Templates, which make working with the Closure Library easier if you are totally committed to The Google Way and comfortable building js apps on someone else’s framework.
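
For flavor, here’s the general idea of that compilation step. The “compiled” output shown in the comment is illustrative of minification in general, not actual Closure Compiler output, and the function itself is just a made-up example.

```typescript
// Input: readable source with descriptive names.
function computeTotalPrice(unitPrice: number, quantity: number): number {
  const subtotal = unitPrice * quantity;
  const taxRate = 0.08; // hypothetical rate, for illustration only
  return subtotal + subtotal * taxRate;
}

// A minifying compiler might emit something like:
//   function a(b,c){var d=b*c;return d+.08*d}
// Same behavior; names shortened, constant folded, whitespace gone.
console.log(computeTotalPrice(10, 3)); // -> 32.4
```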

So what’s the SEO perspective? Well, you should go back and reconsider why Google may have started hosting the most popular javascript libraries on the Google content distribution network last summer. I raised the issue then, but didn’t highlight specific reasons why I was giving it so much attention. Google has long distrusted javascript. Since Google can’t actually crawl and interpret all of the javascript that may be modifying published web content on the Internet, js provides clever webmasters with a means of resisting the Borg. But if we all pulled our standard jQuery and MooTools and Prototype js libraries from Google’s CDN, Google could “trust” our sites more than sites which hosted their own js libraries. There wouldn’t be any “funny business” if the libraries were known to be clean.
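
The mechanics are trivially simple, which is part of the appeal. A sketch (the hostname is Google’s real Hosted Libraries domain; the jQuery version is just a period-appropriate example):

```typescript
// Load jQuery from Google's Hosted Libraries CDN instead of a local
// copy. Every site referencing this URL serves byte-identical, known
// code -- the "no funny business" trust angle described above.
const script = document.createElement("script");
script.src =
  "https://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js";
script.onload = () => console.log("jQuery loaded from Google's CDN");
document.head.appendChild(script);
```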

With Closure, Google is able to go a step further *if* we all adopt it. Javascript submitted to Closure for compilation could be “indexed” and assigned an id code on the web, so that from that point onward Google would be able to recognize (and trust?) that compiled code. Any change would necessitate a recompile (or, in other words, re-registering your javascript code with Google). Given Google’s development of the Chrome browser, Google could also offer additional incentives for code registration — it could run faster in Chrome. Or it could be pulled from the CDN like jQuery, and your project might benefit from a kick-start if you use the Closure Library and Closure Templates.
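
Purely as speculation on my part, the “registration” mechanics could be as simple as fingerprinting the compiled bytes. A hypothetical sketch (nothing like this is documented for Closure; the scheme and filename are entirely mine):

```typescript
// Hypothetical: fingerprint a compiled bundle so that any change to
// the code is detectable. Under the scheme speculated above, a changed
// digest would mean "re-register" (i.e., recompile) with the Borg.
import { createHash } from "crypto";
import { readFileSync } from "fs";

function fingerprint(path: string): string {
  const compiledJs = readFileSync(path);
  return createHash("sha256").update(compiledJs).digest("hex");
}

console.log(fingerprint("app.compiled.js")); // filename is illustrative
```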

Again… I’m not saying this is Google’s intent with Google Closure. I am saying that if I were Google, I would certainly explore this as an opportunity to advance my control of the web without stifling innovation as much as I otherwise would… such as Google has been doing lately.

With Closure, Google closes the javascript loop, with what is basically registration of js code with the Borg. Like it or not, whether Google does it today or not, it is a viable option for Google, and certainly easier than trying to license white hat SEOs.


Related questions that Google should probably answer if it wants support from developers:

Is this intended to compete with jQuery? Complement it? Or are devs expected to run jQuery through Closure, too?

Why build another js minifier? Dean Edwards’ Packer works very well… even better than Closure, according to early reports.

What about the Google Web Toolkit js library? What’s the roadmap here… or is there a js roadmap at all?

A few years ago I was involved in some IT grants from the US government. I was in awe… of the incredible corruption I witnessed. It wasn’t “China style” cash bribes, but rather relationship corruption. Political stuff, where knowing someone got you access, and keeping a strong relationship (by whatever means available) got you continued success. I just called it corruption… I’ll explain why.

At one point, shortly before I left that world out of disgust, I reviewed a $600,000 contract renewal for maintenance of a small database that almost no one used (relatively speaking). Not a large, complicated Oracle database. Not a secure, sensitive database. A simple information database (in a third- or fourth-generation database language) that had simply gone too long without smart management oversight. No one wanted to touch it. Even the IT guy who built it and maintained it didn’t like working with it, but he apparently didn’t have other job prospects that paid this well. The project had no real career-building value. It was not part of any project that would succeed, nor did it enjoy a high profile. It was simply there, and no one wanted to be the person who decided to stop funding it. The renewal proposal was about the same as it was at the last renewal, plus a little more, and came to around $600,000.

That’s over half a million dollars of your tax money, to fund a seriously second-rate (I checked) self-taught IT guy working on something like a single Microsoft Access database, which was used successfully by probably a hundred people each year (mostly because those hundred didn’t otherwise know how to find the data in one place). As a taxpayer, I bet you didn’t know you paid for that.

As I thought about asking specifics about why this was nearly $600,000 and whether it was needed or not, another project came across my desk with a higher priority. It was also a database — this one written in scripts for an IBM AS/400 mainframe system. The database was noted to be essential. It contained vendor contact information going back nearly 20 years. It was large, and it was not a real database but a set of scripts. The mainframe was being retired, and the proposal was to either recode the information into a new, modern “database” or fund the maintenance of a dedicated legacy AS/400. The recoding project was estimated at tens of thousands of dollars just to get started with a requirements review, with no certainty about the actual total cost. The legacy mainframe was budgeted in the $150k range, plus annual maintenance overhead. Not a ton of money, but not insignificant.

As an IT guy I knew the only correct answer was recoding, and that recoding should only be considered after a careful review of the data and its value. Over a few months’ time I successfully navigated the politics and gained access to the “essential data” (in other words, I kept my job while the mainframe guy was eventually forced to retire). I loaded the data into Excel and examined it. Of the hundreds and hundreds of vendor contacts, only 11 were current. Eleven.

In short, much ado about nothing. And that process took about 4 months, plus the 20 minutes it took me to use Excel.

I was told that the $600,000 contract went through, and the project would be re-examined at some future date. I subsequently learned that databases (at that time… the late 1990s) were the Sacred Cows within government agencies. They were difficult to control, acknowledged as valuable, and sensitive — databases could be “corrupted”, could be “tainted”, could be “infiltrated” or “misappropriated”. All great scary important government words, which meant dollars could be safely assigned to databases, with little credible challenge. Databases were technology, and technology was sexy. Databases were large (or could easily be made large), which meant they provided a basis for justifying new, faster computers every year. Database administrators in the real world commanded large salaries, so self-taught pseudo-DBAs working for the government could get a decent fraction of a high salary by association. Database administration was also dynamic, which meant training budgets could be justified.

I honestly believe I could have shown that the $600,000 database was nearly as useless as the 11-vendor database, had I been given a chance. But of course I wasn’t given that chance. I was given a grant of my own instead.

In 2009 we enjoy the ramp-up of the age of the government web site. We’ve already seen one web site project approved for over $18 million… and it’s a web site to tell the taxpaying American public specifically how the government is spending our tax money.

We’ve seen several independent consumer-facing web sites launched by the government, each with a unique style, on a unique technology platform, published by a different agency. I can only assume each of these has a maintenance contract as well, and is counting “hits” to justify renewal in the next round of funding. I can only expect that pseudo “branding experts” are preparing the language that will be used to justify intangible asset value as well, a new Sacred Cow for a new age. I don’t recall the Federal Register ever having to package itself as a consumer-friendly magazine, but apparently our new government in Washington thinks government-funded webmasters are the solution to satisfying the public’s need for accountability. What a scam.

And the latest scam is this joke of a web site from the Federal Trade Commission (FTC), apparently intended to help consumers understand that credit reports costing $14.95 per month are not actually free. You paid for that web site, and you’ll pay for the maintenance. You’ll pay for a junior web specialist to get Dreamweaver training, you’ll pay for her associate to take an “intro to marketing” course, and you’ll pay for her supervisor to get “how to manage technical creatives” training. Or you’ll pay a web company a few hundred thousand dollars to do it all for you (with a maintenance contract going out a few years). All for the very important purpose of… what, exactly?

Exactly. To translate caveat emptor into modern American English, on a web page that no one will read. Unless it ranks at the top of Google. Which it won’t do unless Google forces it there, since it is so poorly crafted. And even in the #1 spot, would it convert? Look for the call to action. Can’t find it… wait… no, I thought that was it, but no… oh okay, I see it… um, yeah, that’s probably it. I’ll have to try before I know for sure. It clicks thru to yet another government website (ftccomplaintassistant.gov). Now where’s the “submit a complaint” call to action? Hmm… let me try and find it.

I’d be surprised if the entire process enjoys a goal success rate of 3%.
And if you think I’m exaggerating, go to the site and follow thru to file a complaint. I decided to file mine against the FTC, for misrepresenting itself as a non-profit entity protecting the American consumer. I was going to focus my complaint on the concept of personal inurement… the use of a non-profit entity to enrich the lives of those operating it, such as through good-paying jobs and job perks. I know it doesn’t apply to government web sites, but I wanted to do it anyway, so the complaint would sit for years in someone’s “how do we count this one” pile.

I didn’t get far. The web site’s “file a complaint” form forces virtually all of the complaint fulfillment process back on you, the submitter, via a process filled with pick lists and forms to properly classify and categorize your complaint. Almost everything I wanted to pick was not classified, and required me to choose “other”. Even the “credit reporting agencies” or “credit reports” issue was not listed as a popular topic. I bet the drop-out rate for that feedback form is in the high 80% range, which would be astonishing for a site catering to already pissed-off complainers.

But the FTC’s management doesn’t care about that metric. They care about the ones I was asked to grade via a “user feedback form” commissioned through the very much for-profit vendor ForeSee Results, which sent me a “random feedback” survey. They wanted to know exactly how satisfied I was with things like the FTC complaint form’s “visual appeal”, “balance of graphics and text”, and “number of clicks it takes”.

Exactly. More spending to justify more spending. Or, in other words, we’re stockpiling expensive hay to keep feeding the new sacred cows we outsiders call “web sites”.

I’ve been a competitive web publisher (and SEO consultant) for many years, and I’ve been participating in domain development for the past few years, working with domain portfolios and people generally classified as domain investors or “domainers”. Lately we’re seeing news articles about scams and rip-offs, and some of those are on big premium domains known to have been developed by domainers (with development partners, of course). Most claims of “scamminess” focus on the monetization angles pursued by the sites.

Question: Are all domainers scammers? No, not all of them.

Proper domain development is an expensive and detailed process. The most important aspect of successful domain development is web marketing strategy, or publishing strategy – the “why” that should be driving the development process. For those of us experienced in search optimization (SEO), this is the fundamental aspect of our work. Without a strong set of publishing goals and an associated web strategy, any optimization efforts will succeed only at the whim of the search engines. When they are sloppy, and when they leave profits on the table, you can take them. But when they pay attention, you get very little. And when search engines focus attention on actually taking the profits out of your market, you get nothing.

Google has been doing this in more and more markets lately. Any SEO who didn’t pursue a sound publishing strategy a year or more ago is feeling the heat of poor performance right now. How they respond to that heat probably reveals a lot about how they approach domain development in general.

Many domainers choose only to develop when they find a development partner willing to go after fast money opportunities, which promise a lot of money for little work, risk or investment. Absent that, they are willing to wait. That process acts as a filter, eliminating most opportunities and creating opportunity for scams.

Take your own look at the “free credit reports” marketplace. Does this web site look legitimate? Does it look like a safe and wise choice for getting your government-mandated free credit report? What about this web site? Here’s a hint — the ugly one, with poor optimization, a poor user interface and very little character, is the official, safe one the FTC expects you to pick. The others? The FTC says they are scams… because they actually sign you up for automated monthly rebilling for various kinds of credit monitoring services. Check out the left side of that site, and the full paragraph of information that starts with “Important Information” and says it is not the official free site, does charge a fee, and even links out to the ugly site. Apparently that’s not enough for the FTC (PDF), or for at least one congressman.

Scammers exploit opportunity as fast as possible, as aggressively as possible, without regard to consequences, which are often viewed as someone else’s problem (SEP). Standard Operating Procedure (SOP) is to make money as fast as possible; SEP is what’s left behind. Sometimes the investors inherit the problems. Sometimes the economy does. Usually we are all left with more cautious, more conservative, more heavily regulated environments, while the scammers move on to the next opportunity for exploitation.

Contrary to scammers, more traditional businesses seek to secure a mind-share position within a marketplace, maneuver into a position of control and influence, and then exert that influence in ways which manage the marketplace, keeping it profitable (for them) while erecting barriers to entry for competitors. SOP for them is a long-term play, even when fueled by revenues gained from fast-acting, short-term exploitation of transient opportunities (such as those that may exist after innovation and disruption, when such companies build their “war chests”). While scammers take the money and run, real businesses take the money and secure dominant positions in the marketplace.

Strategic SEO/web development is based on sound strategy. The FTC and the entity it designated to set up that ugly, not-very-trustworthy-looking free credit report website had no such web strategy. And it shows.

You’ll find a large number of free credit report websites monetizing on those subtle rebilling programs the FTC despises, and the most successful ones are on premium domains like FreeCreditReport.com, AnnualCreditReport.com, etc. Premium domains. Are they owned and operated by domainers? Wholly? Partly?

The domain investment industry grew out of nowhere to very high value over the years that the web grew from an idea to the central commerce and information network it is today. A portion of the domainer community succeeded by stepping into the market, taking risk, making wise moves and/or getting lucky. A portion stepped in and worked hard and/or smartly, again taking risk and investing. And a portion elbowed their way in by breaking rules and conventions, taking advantage of others, and exploiting the commons. We are all free to assign character traits to individuals as we might like, but this is not unlike other industries such as banking, railroads, IT or even SEO.

In the late 1980s and early 1990s, when domain registrations were free and most generic dot-com domains were unregistered, at a time when it was understood that the Internet was non-profit and domains were for companies or individuals (one domain per entity), a Unix system administrator at a university may have registered dozens of names for himself anyway (perhaps working on company time, which may have been funded by grants from the US government). An administrator somewhere else may have reserved names in the system as if they were requested by others, only to take them back for himself years later when they were worth millions. There are many such success stories. It’s not too different from the way “robber barons” operated during the industrial revolution. But there are many others who earned their stripes in more honorable ways as well. In short, it’s business, American style.

So now some domainers are looking to develop their domains into revenue generating businesses, by working with development partners. Some are selling their domains to others, hoping for big prices from those looking to generate revenues on those domains. Along the way, business people driving development are choosing the highest profit opportunities, which often involve consumer scams. When that happens, who are the scammers?

If you enter “free credit report” into Google, what comes up? It’s always more than one site. Anything else would be un-American.

I think it’s pretty obvious to everyone who takes a closer look. No matter what necessary illusions get published in the meantime, those knowingly ripping off others assume the responsibility for the fraud. The rest are doing business… meeting market needs, creating opportunities.

Not all SEOs are scammers. Not all domainers are scammers. You don’t need to cheat and steal to make money on the Internet. And your government doesn’t actually have to do a good job with your tax money, does it?

Mandriva Linux has released Mandriva One 2010 (see the free Mandriva Linux OS download here), an upgrade of Mandriva One Spring 2009. The Mandriva upgrade process is not very difficult, but there are specific steps to follow. It’s not as hard as a kernel update (for that, see here). Also be sure to back up your existing Mandriva Linux files, for safety.

As more resources for the Mandriva upgrade (2009 to 2010) come out, I’ll link to them. In the meantime, a discussion of the Mandriva 2010 upgrade process is ongoing here. A discussion of the transition from the Mandrake brand of Linux to Mandriva (since the assumption of Mandrake by Mandriva the corporation, formerly known as MandrakeSoft) is available on Wikipedia. Search engines are returning this page for Mandriva upgrade, but it’s not really relevant.

As usual, San Diego’s favorite chiropractor Dr. Klein is making it happen at the Las Vegas Pubcon Internet entrepreneurs conference. See the sponsors? The Azoogle network, Train Signal (a software training company), We Build Pages… and lots more, all supporting a quality charity event.

And speaking of quality charities, I’ll take this opportunity to highlight the amazing Captain Ozone, a project of Environmental Media Northwest, a group focused on introducing young school kids to the power of the media as a means of improving the world (environmentally speaking). Maybe Captain Ozone needs bigger speedos, but the mission seems honorable, and I totally support the idea of teaching kids the power of public communications as early as possible.