2 Responses to "The semantic web vs the bionic web"

The initial idea was that THIS was supposed to be the semantic web. Everything was supposed to be interactive, and data was supposed to be tagged. For better or for worse, the lowest common denominator won out: web browsers became capable only of pulling down content, not of uploading and editing it, and nobody bothered to tag their data (not to mention that the standards were not yet fully developed…)

The second method is to have computers automatically go through and tag our data: face and image recognition for images, artificial intelligence (or just REALLY good algorithms!) for text. Google is taking this approach; the new Google Recipe Search (not its official name) is one example.
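To make the machine-tagging idea concrete, here is a toy sketch. Real systems use far more sophisticated extraction than keyword matching (the keyword lists and function names here are invented for illustration), but the essential point is the same: the computer, not the author, attaches the metadata.

```python
# Hypothetical keyword lists -- a stand-in for real content analysis.
TOPIC_KEYWORDS = {
    "recipe": {"cup", "tablespoon", "bake", "preheat", "ingredients"},
    "sports": {"score", "inning", "goal", "season"},
}

def auto_tag(text):
    """Return the set of topic tags whose keywords appear in the text."""
    words = set(text.lower().split())
    return {topic for topic, kws in TOPIC_KEYWORDS.items() if words & kws}
```

A page about baking would come back tagged `recipe` without its author ever adding a tag, which is the whole appeal of the approach.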

The third method is to have users tag the data by hand. That is, more or less, the method that url.com is taking. Facebook.com also uses this method: when a user uploads a picture, they identify who is in it and draw boxes around the different people’s faces. This method has relatively good accuracy, but one person with a lot of free time can mess things up (which is true of all the methods, actually).
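Hand tagging amounts to users filling in a small metadata record themselves. A minimal sketch of what a Facebook-style photo tag might look like (the field names and schema here are hypothetical, not Facebook's actual format):

```python
from dataclasses import dataclass

@dataclass
class FaceTag:
    name: str    # who the user says this is
    x: int       # top-left corner of the box, in pixels
    y: int
    width: int
    height: int

# User-supplied tags, keyed by photo filename (example data).
photo_tags = {
    "beach.jpg": [
        FaceTag(name="Alice", x=40, y=25, width=80, height=80),
        FaceTag(name="Bob", x=160, y=30, width=75, height=75),
    ],
}

def people_in(photo):
    """List the names tagged in a photo -- only as accurate as its taggers."""
    return [tag.name for tag in photo_tags.get(photo, [])]
```

The accuracy (and the vulnerability to a bored vandal) both come from the same place: the data is whatever the users typed in.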

All of this is made harder by the spammers who pollute the web with junk. Google spends a huge share of its effort just combating spam. A colossal waste of brain power brought on by the greed of idiots.

Spam is what you get when you design a system that works the way our present Internet works. It’s also trivial to get rid of, really trivial, in fact. Don’t want to receive spam e-mails? Design your mail system so that it costs time or money to send them. Don’t want spam in your search results? Only index sites that pay to be listed.
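The "make sending cost time" idea has a well-known concrete form in Hashcash-style proof of work: the sender must burn CPU finding a nonce whose hash has enough leading zero bits, while the receiver verifies it almost for free. A minimal sketch (the function names and the difficulty value are illustrative, not the real Hashcash stamp format):

```python
import hashlib
from itertools import count

DIFFICULTY = 12  # leading zero bits required; real systems tune this cost

def _leading_zero_bits(digest):
    """Count leading zero bits in a hash digest."""
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def mint_stamp(recipient):
    """Sender's side: burn CPU time searching for a valid stamp."""
    for nonce in count():
        stamp = f"{recipient}:{nonce}"
        digest = hashlib.sha256(stamp.encode()).digest()
        if _leading_zero_bits(digest) >= DIFFICULTY:
            return stamp

def verify_stamp(stamp, recipient):
    """Receiver's side: one cheap hash check."""
    digest = hashlib.sha256(stamp.encode()).digest()
    return stamp.startswith(recipient + ":") and _leading_zero_bits(digest) >= DIFFICULTY
```

Minting one stamp per message is negligible for a person but ruinous for someone sending millions, which is exactly the asymmetry the economic argument relies on.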

Economic solutions such as these, however, exact their own cost — and in the case of search engines, we would expect our “pay for inclusion” Google to be far less comprehensive than the Google we know and love.

It’s a free Internet — and people (meaning everyone, spammers included) should be free to create whatever web sites they wish, I’m sure you will agree. Google knows its options — they’ve got a few sharp folks, I’m sure — and has decided to exercise its ability to freely (and programmatically) decide what to index rather than try to solve the spam problem through other means. This isn’t a waste of brain power any more than two companies building the same product is — it’s a purely economic, rational decision, part of the cost of doing business that one factors in when deciding whether to build a search engine at all.

One can even argue that, most likely, if it weren’t for spam, Google wouldn’t have hired the engineers working to combat it in the first place (in the same way it hasn’t hired heavy equipment operators) — they simply wouldn’t be needed. Maybe Google’s market cap would be infinitesimally greater as a result, but at this point, really, does it matter?

And lastly, of course, what’s wrong with greed? Or idiots? Or greedy idiots? 😉