There is no link to the DTD itself, so it relies on browsers having it stored; the declaration prompts the browser to first go into standards mode and then redo the whole thing in quirks mode after the first grammatical error. Does the designer even know what it means? It means you tell a browser that your site conforms to that document type definition, so that it can go into standards mode instead of guessing its way around some informal piece of tag soup that has no real meaning except some de facto standards that partly resemble each other.
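For reference, this is the sort of declaration that belongs at the top of the page; the page content here is just a placeholder of my own:

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
  <head><title>Placeholder title</title></head>
  <body>
    <!-- with the DTD reference present, the browser can go
         straight into standards mode instead of guessing -->
    <p>Placeholder content.</p>
  </body>
</html>
```

Note the second line: that is the link to the DTD that the site in question leaves out.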

Whoever made this is a charlatan, like web designers typically are, exploiting the fact that the people who order pages usually have no knowledge of the subject themselves. Naturally they can only care about how it looks in the end and don't know what else comes with it. I do find it most ironic that the designer did add info about himself; he seems to at least know how to add metadata, he just doesn't care that much about the band as long as he gets paid.

//this topic has no real use, I know, but I found it one of the most unusual opening topics I or anyone ever made. Besides, I like to make a bad impression at first to see if people still like me afterwards. <- that sentence is probably going to be quoted and replied to in a predictable manner.

Actually, I'm just someone who thinks many web designers rip their clients off, because obviously, if the clients were knowledgeable, they could have done it themselves. They naturally just think 'oohhh, nice looking, it's good.' and of course don't realize that good coding involves so much more, like:

- security (god, I suck at this; well, I'm decent, but no l33t hax0r and thus no pro on security)
- optimal code that loads fast; the layout of the site takes ages to load, and the same layout could load a lot faster if the code were optimized and didn't contain grammatical errors the browser has to 'repair', which costs time and resources on the visitor's CPU. ( click, lots of ungrammatical things there, and the code looks like it's pieced together by Dreamweaver or something, which of course delivers hardly as good code as a human coder can. Also, ungrammatical code is just not 'neat' / 'elegant')
- semantics; a layout is nice and all for a human reader, but it's still not going to explain to a search crawler what the site is about. That the centre thing is a menu is quite obvious to a human, but not to a crawler or some other machine, so you have to tell them in some way.
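To illustrate that last point (my own hypothetical markup, not the code of the site in question), the same menu can be marked up as meaningless boxes or as something a machine can actually parse:

```html
<!-- tells a machine nothing: -->
<div class="box2"><b>News</b> | <b>Tour</b> | <b>Discography</b></div>

<!-- tells a machine this is a list of navigation links: -->
<ul id="menu" title="Site navigation">
  <li><a href="news.html">News</a></li>
  <li><a href="tour.html">Tour</a></li>
  <li><a href="discography.html">Discography</a></li>
</ul>
```

A human sees roughly the same thing either way; a crawler only gets something out of the second version.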

The W3C semantics extractor is a tool that provides you with the basic semantics of a site; observe the difference:

Of course, the layout of the last site is a lot simpler, but that shouldn't stand in the way of semantics, and this is just the outline of the site sans content. The extractor is a script programmed in XSLT, a very basic transformation language; the semantics there could be computed client-side by most browsers. Imagine what a Google spider is supposed to find there, so better give them a hand.
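For an idea of what such a transformation looks like, here is my own simplified sketch (not the W3C's actual extract-semantic.xsl) of a stylesheet that pulls a crude outline of headings out of an XHTML page:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:h="http://www.w3.org/1999/xhtml">
  <!-- collect every h1/h2/h3 in document order into a bare outline -->
  <xsl:template match="/">
    <outline>
      <xsl:for-each select="//h:h1 | //h:h2 | //h:h3">
        <heading level="{local-name()}">
          <xsl:value-of select="."/>
        </heading>
      </xsl:for-each>
    </outline>
  </xsl:template>
</xsl:stylesheet>
```

Any XSLT-capable browser or crawler can apply something like this to a valid page; against tag soup it has nothing reliable to match.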

Also, Flash is nice looking and all, but since it's a closed binary format, crawlers can't access the info on your site, ultimately making it a big mystery to Google on what queries your site should be returned. So to help them, for Flash sites you really have to fill in the metadata or provide a transcript for the Google bots in text that's not visible to human readers. Adobe of course doesn't want people to know this; some web designers don't even know this. That's why I use DHTML and not Flash where possible.

I'm not going to make a pixel-for-pixel match; you probably all knew I could if I tried. I'm going to make a layout in the sense that you wouldn't notice anything was wrong if this had been the first layout.

I scanned through the code once more while making this. This site is GARBAGE on the coding level, created by a charlatan; the CSS is ridiculously redundant and probably created by Dreamweaver or some other crap. This line:

font-family: Arial, Helvetica, sans-serif; appears no less than 50 frigging times, each time declared again, while CSS inherits the properties of its parent element node. One would have been enough....
The rest of the code is also garbage, GARBAGE. Just take a look at my source code and then at the site's and notice the difference: so much more elegant and simple, no redundancy. Nothing. No double images loaded that just load over each other and produce the same content. IT'S GARBAGE.
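For the record, font-family inherits, so assuming the whole site really does use the same font everywhere, those fifty declarations collapse into one:

```css
/* declared once on body; every descendant element inherits it */
body {
  font-family: Arial, Helvetica, sans-serif;
}
```

Only elements that should deviate from the house font would then need a declaration of their own.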
Observe the semantics too: http://www.w3.org/2005/08/online_xslt/xslt?xmlfile=http://cgi.w3.org/cgi-bin/tidy-if%3FdocAddr%3Dhttp%253A%252F%252Fnihilarchitect.net%252Fqrvnl%252Fblind%252520guardian.html&xslfile=http://www.w3.org/2002/08/extract-semantic.xsl
If code is poetry, then that crap up there was written by Martial.
Oh, don't try it on IE yet, of course...

The W3C validator and semantics extractor are ultimately only very limited tools for judging a site and can actually fail. It's possible that a site does not meet XHTML 1.0 while the validator still says yes, because DTDs as a language are not capable of stating that an attribute may have a maximum number of characters, and some attributes of XHTML 1.0 can, by the formal definition of the language, only have one-character values. The only way to really check code is still by hand.

You never really finish learning this, I suppose; if I look at sites I made some time back, I know I would do them differently now, and likely will again in the future.

Which is pretty funny considering how the site looks. A good trick I employ is making all kinds of headers but using display:none; to render them invisible. I did that too with that Blind Guardian site. A site should be intelligible to machines first and humans second; that is my philosophy.
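The trick itself is nothing more than this (the heading text is just an example of mine, not taken from the site):

```html
<!-- present in the markup for machines, invisible in a graphical browser -->
<h2 style="display: none;">Discography and song lyrics</h2>
```

A text browser or crawler still sees a proper heading there; a human visitor never does.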

In A.D 2101
Site is beginning
What happen?
Somebody give up us the body!
We set content!
What?
Paragraph turn open!
It is text
'how are you gentlemen, all your base are belong to us'
paragraph on destruction
What do next?
You have no other content to place, make your end, ha-ha

The web is coded like complete crap because browsers originally started accepting crap code; then it got worse and worse because people relied on that, and a browser that only accepts valid code is doomed to fail because of the nature of the web.

A simple step towards valid code is using HTML Tidy, a tool that makes your code valid but still leaves it undesirable.
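Tidy is a command-line tool; a typical invocation (the file names are placeholders) would look something like:

```shell
# read tag soup, write indented XHTML, and report the errors it repaired
tidy -indent -asxhtml -utf8 -o cleaned.html soup.html
```

It repairs unclosed tags and the like, but elegance and semantics are still up to you.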

The site may be grammatical, but a good story it still is not. Still, nice effort; a lot better than most code.

By the by, the person with the unpronounceable, Finnish-looking name is right. I also mostly agree with the article you linked to. It kind of reminded me of a paper on the semantics of video games I wrote.

http://www.gedichtblog.de
They say that there's a broken light for every heart on Broadway.
They say that life's a game, then they take the board away.
They give you masks and costumes and an outline of the story
Then leave you all to improvise their vicious cabaret...

t.a.j. wrote:...is right. I also mostly agree with the article you linked to. It kind of reminded me of a paper on the semantics of video games I wrote.

Ah, link it.

Also, any points of critique yet? As the top banner says, the pages are not finished yet and contain errors, typing mistakes, and my ever-recurring 'little people know that...' ('people' is a mass noun in my head).

I do proofreading in my job; I didn't read your article to proofread it, but to find out what you think. In other words: sorry, I didn't pay attention to that.

I'll see about putting that paper online when I find the time to do a final proofreading of it. The main reason your article made me think of it is that you talk about machine operations and how they are represented or translated into human-understandable form, and how the machine text is different from what the reader/player sees and gets to interact with. There have been people who have mistaken the representation - in my case, the game graphics you see on the screen - for the real thing - the game, which exists mostly as machine states, which in turn are realizations of abstract game states. I suppose when one considers online multiplayer games, your point about machines communicating far more, and in some important way different, information than ever reaches the human being at the screen would be something valuable to keep in mind. But of course, I'm not really interested in giving good advice to programmers or web designers, but in the semantics of talking about video games.
Come to think of it, I should try and expand my ideas to cover web browsing, too. The principles should be similar.

http://www.gedichtblog.de
They say that there's a broken light for every heart on Broadway.
They say that life's a game, then they take the board away.
They give you masks and costumes and an outline of the story
Then leave you all to improvise their vicious cabaret...

t.a.j. wrote:I do proofreading in my job; I didn't read your article to proofread it, but to find out what you think. In other words: sorry, I didn't pay attention to that.

Ahh, I thought you meant you agreed with the facts claimed in it.

t.a.j. wrote:I'll see about putting that paper online when I find the time to do a final proofreading of it. The main reason your article made me think of it is that you talk about machine operations and how they are represented or translated into human-understandable form, and how the machine text is different from what the reader/player sees and gets to interact with. There have been people who have mistaken the representation - in my case, the game graphics you see on the screen - for the real thing - the game, which exists mostly as machine states, which in turn are realizations of abstract game states. I suppose when one considers online multiplayer games, your point about machines communicating far more, and in some important way different, information than ever reaches the human being at the screen would be something valuable to keep in mind. But of course, I'm not really interested in giving good advice to programmers or web designers, but in the semantics of talking about video games.
Come to think of it, I should try and expand my ideas to cover web browsing, too. The principles should be similar.

I'm personally for making the semantics more like a human language, not in terms of grammar but in open classes. Now HTML is:

<p>This is a paragraph; let's mention a band in it and then listen to some
<abbr title="Blind Guardian">BG</abbr>, that would be nice.</p>
<p>Ah, this is another paragraph; let's define <dfn>power metal</dfn> here, shall we? It's simply the best thing out there. Check it out <a rel="external bookmark" title="Blind Guardian" href="http://blind-guardian.com">here</a></p>

Instead, I would like to see all that replaced by a general element 'text' that makes it like this:
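A sketch of what I mean; the attribute names and values here are only illustrative, since in this proposal they would be completely open:

```xml
<text class="paragraph">This is a paragraph; it mentions
  <text class="abbreviation" expansion="Blind Guardian">BG</text> somewhere.</text>
<text class="paragraph">This one defines <text class="definition">power metal</text>
  and links <text rel="external bookmark" title="Blind Guardian"
  href="http://blind-guardian.com">here</text>.</text>
```

One element, with the roles expressed through attributes rather than through a closed set of tag names.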

Et cetera, et cetera. All the values for all attributes are completely open to choose, but there are of course some conventions that people can abide by. The look of the site itself is chiefly the domain of the CSS, and for instance the W3C or Google could publish their own doctype recommendations on how to use the attributes and which values Google looks for. An href on any element changes it into a link a graphical agent can click on to go there, of course. Together with 'src' attributes for images, and maybe some other elements for the things that aren't text.

And in the end, it should provide a means to publish semantic transformation schemata which can, losslessly or not, transform one base of semantics into another.

http://www.gedichtblog.de
They say that there's a broken light for every heart on Broadway.
They say that life's a game, then they take the board away.
They give you masks and costumes and an outline of the story
Then leave you all to improvise their vicious cabaret...

// player003.marine007.life() == 4
player003.marine007.hit(5); // now marine007.life() == -1
// player003.marine007 thus dies and is removed; all the effects that has are executed.

This is translated visually for the human into the marine splattering open in blood, screaming and then dying, because that makes playing a lot more intuitive for a human than just sending and receiving code over the command line.