We can’t change history, but we can change the future.
Be nice to each other.
@robertnyman


In these times of information overload, RSS is the only alternative for me to stay on top of things. It gives me the opportunity to read information from about twice as many places as navigating to the web sites. The possibility to just skim through headlines and short descriptions really helps me find what I really should read, as opposed to wasting time looking at banners, trying to understand the navigation structures of 30 different web sites, and so on.

So what does RSS stand for? The most commonly used definition is Really Simple Syndication (which is also the definition in RSS 2.x), but in the RSS 0.91 format it stands for Rich Site Summary, and in RSS 1.0 for RDF Site Summary.
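To give an idea of what a feed actually looks like, here is a minimal, entirely made-up RSS 2.0 document (every title, link and description is a placeholder, not from any real feed):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example blog</title>
    <link>http://www.example.com/</link>
    <description>A made-up feed for illustration</description>
    <item>
      <title>First post</title>
      <link>http://www.example.com/first-post</link>
      <description>The short summary shown in the feed reader.</description>
    </item>
  </channel>
</rss>
```

The item titles and descriptions are exactly what a reader like Sage skims through, which is why feeds make it so fast to decide what to actually read.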

First of all, most (if not all) blogs have some kind of RSS feed (as does mine), to make it easy for returning readers to see if anything has been updated or if today's/this week's topic seems interesting enough to read, and so on. But nowadays most news sites and other web sites use it as well, because it offers a good way to reach out to more visitors and also gives them another option.

Personally, I use Sage, which is an extension to Firefox, to be able to go through the updates in the RSS feeds I follow in an easy and fast manner. Just press Alt+S and you get it as a sidebar, without interrupting your general browsing.

So how do I know if a web site has an RSS feed? Most web sites offering RSS feeds display a small feed icon, and in Firefox you get an orange icon in the bottom right corner, indicating that a feed is available.

As
a developer, you can also make the feed available by inserting a link
tag in your first page/all of your pages, like this example for the
Sage project site:
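A link element along these lines does the trick (the title and feed URL here are placeholders of my own, substitute your site's actual feed address):

```html
<link rel="alternate" type="application/rss+xml"
      title="Sage project news" href="http://example.com/rss.xml" />
```

With this in the head of your pages, feed-aware browsers and readers can discover the feed automatically, which is what triggers Firefox's orange icon.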

In my previous job I worked for a company that has offshore development, mostly for bulk programming purposes to keep the costs down. And not in any of the more common offshore places like India or Russia. No, their offshore development is in Belgrade, Serbia.

All the developers I’ve met/spoken to in the Belgrade office are very nice, but in my opinion it hasn’t really worked out yet for them in their collaboration (due to a number of reasons that I won’t go into here).

However, I read an interesting post which claimed that Internet Explorer has become the new Netscape 4 for us developers (original post found here). I think that statement is a bit too harsh, but Internet Explorer absolutely poses the biggest challenge in everyday development.

The thing that bothers me, though, is when pro-IE developers say things like: “So what’s so special about Firefox/Safari/[insert name of more competent browser here]? What does it have that IE doesn’t?” It is that kind of attitude that scares me: that people are able to get along in their professional life as developers without even knowing about all the things Internet Explorer is missing, the far superior CSS support (amongst other things) in other browsers, and so on.

Well, I advise those people to take a look at Eric Meyer‘s css/edge site (of course, the examples there require a competent web browser).

One of the things I miss the most in Internet Explorer’s CSS support is attribute selectors. How about adding a style for all text input fields, without using classes, and without affecting radio buttons, checkboxes, submit buttons etc.? Impossible, you say! Nope. Just code it like this:

input[type="text"] {
	width: 300px;
	background: #F00;
}

Ah, amazing, isn’t it? Eric Meyer wrote an article in three parts about this in August 2000: part 1, part 2 and part 3. I recommend reading these, especially the “box of possibilities”, as I’d like to call it, at the end of part 3.
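To give a taste of what else attribute selectors make possible, here are two more small examples (the selector syntax is from the CSS specifications; the styles themselves are made up by me):

```css
/* CSS3 substring match: links ending in .pdf get a warning color */
a[href$=".pdf"] {
	color: #C00;
}

/* CSS2 presence match: highlight any input that has a title attribute */
input[title] {
	border: 1px solid #390;
}
```

No classes needed in the markup at all, which is exactly the appeal.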

Basically, what it does if you have it installed (it can only be installed in Internet Explorer on a PC) is that it automatically turns ISBN numbers in a page into links to Amazon, addresses into links to Google Maps, car license plate numbers into links to Carfax, and package tracking numbers into links to UPS. Like the example in the editor’s column: if you navigated to Barnes & Noble, ISBN numbers in their pages were turned into links to Amazon (they have now implemented a fix so this doesn’t happen anymore).

I think this is horrific behavior, altering the content of different web sites, deciding like a god what, for instance, information about books should link to.

And, of course, people are already implementing fixes as well for this. And if it stays this way, many sites will have to implement fixes if they don’t partner up with Google…

I’m also worried about the implications of this. If this keeps up, I’m worried that this feature will indeed be installed in upcoming versions of web browsers, and in the future the user will never be sure where a link might lead: if it’s an intentional link by the web site, or if it is added on by your web browser/toolbar etc.

PS. Normally, I try to write daily Monday to Friday, but with Easter coming up I’ll write my next post on Tuesday March 29th. For you developers reading this, if you, as opposed to me, have some spare time during the weekend, I recommend reading Roger’s excellent piece Developing With Web Standards. DS.

If you don’t buy a license you get a built-in ad space that is constantly present. Given how sick and tired people are of banners, pop-up windows etc., I understand why they don’t want to install a program with built-in advertising that is always present. Especially not when there are so many good web browsers available that are both free and ad-free.

Version handling

They release a lot of minor versions all the time. If something doesn’t work in your current version, you can easily upgrade to version 7.54 where the issue has been resolved. Sure, great, but this is the kind of thing that just kills a developer’s interest in it. For a web browser that perhaps has 1% of the web browser market, it’s impossible to justify testing in 20 different versions just to see if it works in the version 7 that is on the market right now.

Rendering bugs

Maybe this is just me, but the versions I’ve tested have been working just fine, except for some rendering bugs I’ve experienced when it comes to background colors being rendered over too large an area etc.

The interface

A totally personal opinion, but I don’t think it looks good.

What are they doing right then? These two things are really attractive to me:

In an upcoming version they’ve added support for SVG (Scalable Vector Graphics). An incredibly good move, and the day all major web browsers support it natively, web user interfaces will be a new world. Something that is also happening in Mozilla. About time for Internet Explorer too?

PS. Visited the Opera web site, where they’ve added the ENORMOUSLY annoying keyboard shortcut alt+D that automatically sends you to the download page. For me, it’s the keyboard shortcut I always use to move focus to the address bar. DS.

PPS. I live for the hope of daylight-saving time now. My little daughter wakes up at 5 in the morning, and I really hope that daylight-saving time will nudge her time of waking up one hour forward. DS.

What I wonder is what kind of buttons are the most suitable: the built-in system buttons, creating your own images, or using links with JavaScript calls? Generally, I don’t like links with button functionality, like submitting a form, so to me that isn’t an option.

When it comes to the two other alternatives, there isn’t any easy answer. It can be a design issue: when you create your own images everything can look so much better, you know they will look the same in all web browsers, etc.

However, something I find confusing is when designers (usually Mac users themselves) create a design where the buttons just look like the buttons in Mac OS X. There is nothing wrong with that as such, but it feels weird to have a design where you have to use images for buttons when they’re just a rip-off of the built-in system buttons from another operating system. If one is going to use images, I ask for more creativity, please.

But when it comes to the recognition factor, I lean towards using the built-in system buttons, since those are the buttons the user sees on most web pages and is used to, so he/she doesn’t have any problem finding which button to click.

At least in my opinion…

The seminar I went to yesterday was mainly about Visual Studio 2005 and .NET Framework 2.0, which should officially be released this fall. Through the years I’ve gone to a number of seminars, and I’ve always thought that Microsoft are very professional when it comes to holding seminars.

Like yesterday: a nice venue (SF Skandia on Drottninggatan in Stockholm), good snack options in the breaks, nice and not too pushy Microsoft partners standing outside the hall with a considerable amount of candy…

But also because the speakers were professional, especially Johan Lindfors, whom I find to be a good and enthusiastic speaker without being too perky or colored by his company. When it comes to speakers, I prefer that they come across as individuals, and not just like puppets preaching blindly about their company, and yesterday I got a positive experience in this regard.

Unfortunately the seminar was too little about web development for me, but I guess I have myself to blame for not reading the agenda carefully enough to see that ASP.NET 2.0 wasn’t mentioned. Apparently, they’ve had seminars about that prior to this one, but I’ve been too busy with other things to notice those. So now, when I had the time, it wasn’t 100% right.

Anyway, it was interesting to hear about the news in the development environment, the possibility for different roles in a project to perform their tasks through Visual Studio Team System, especially when it comes to testers, who seem to get a very good environment. I was also happy to see that SQL Server 2005, among other things, offers a lean way to format XML through PATH (previously, it has, for me, mostly been about getting XML as XML RAW and then formatting it with XSLT. An unnecessary step, so I welcome the new formatting options).

Overall, as a developer, I have to say that I’m impressed by Microsoft’s .NET venture. I really think it’s a step in the right direction, and the simplifications and possibilities that are offered to developers are very, very good. Sure, ASP.NET doesn’t generate perfect code; for instance you can’t generate valid XHTML, and it bothers me with its BrowserCaps that have the notion that Internet Explorer is the superior web browser and that all others should be served HTML 3.2 and can’t handle any on-the-fly changes.

But it seems they’re going to solve this with ASP.NET 2.0, at least when it comes to the formatting of the generated code from WebControls.

The only area where they appear to think that they’re above standards is with their Internet Explorer and its shortcomings. My big hope is that they will really face this problem with IE 7, and then maybe there’s hope!

Personally, I’m of the opinion that frames should never be used (iframes are a totally different question; they’re just part of a “normal” page). There are a number of reasons why you shouldn’t use frames:

A couple of examples for the developer:

Difficult to keep the different pages synchronized, especially when it comes to manual reloading by the user

Hard to push out content, e.g. when the navigation in a frame has been updated

Search engines can find single pages and then link to them out of their context

A couple of examples for the user:

Impossible to create a bookmark for a certain page

Not possible to save a link that, for instance, goes directly to a product page

When it comes to the technical aspect, there exist such good possibilities to cache parts of the page for reloading purposes that server load shouldn’t be a reason to use frames.

Another argument people use in favor of frames is that the menu frame is always there (for instance, to the left) and doesn’t “blink”, but this is more of a browser thing than the technical solution. If one, for example, uses a Gecko-based web browser (such as Firefox), it fetches the page one has navigated to while keeping the current page visible until the next page is fully loaded, so one doesn’t experience a white in-between page or a jump, as opposed to how it’s handled in Internet Explorer.

From a web user interface perspective there are a number of alternatives for how to emulate frames if one wants to, and the day Internet Explorer supports the CSS value position: fixed it will be a piece of cake.
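A minimal sketch of the idea (the ids and measurements here are my own invention): a menu kept in place with position: fixed while the content scrolls past it, giving the frame-like effect without frames:

```css
/* The menu stays put while the rest of the page scrolls */
#menu {
	position: fixed;
	top: 0;
	left: 0;
	width: 150px;
}

/* Push the content out from under the fixed menu */
#content {
	margin-left: 160px;
}
```

This works today in Gecko-based browsers, Opera and Safari; Internet Explorer is the one holding it back.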

So why do people still use frames? Lack of competence, or are there cases where it is motivated?

First of all, I have to tell you something funny that happened at work yesterday… I was trying to convince the girl sitting next to me to watch the movie Finding Neverland, and we talked generally about the movie. Then I went to Aftonbladet.se, copied a quote from their review of the movie and sent it to her through MSN Messenger. It read something like: “A fabulous tribute to the lust of dreaming and writing”.

The thing was, we had talked about our intranet in between while I was doing this, so she thought the quote was my opinion about our intranet! 🙂 I really wish we had an intranet that gave me that feeling!

Which leads me to the topic of today… Naturally, everyone wants as much internal information as possible, but are traditional intranets the way to go? How many visit their intranets at all? Those who don’t, or who rarely do: is it out of lack of time, a boring intranet layout/form, or “dead” content?

Something I think there’s a lot of talk about is internal portals in companies, where people shouldn’t have to look for documents on a lot of mapped network drives, but instead use a common general interface to access all relevant information, which of course would be role-based as well. Is this the way to go?

About 1½ weeks ago, I met the CEO of Wipcore, where he presented the new version of their system for e-commerce. Before I met him, I had decided to question the previous version of their tool, where the admin interface demanded that the user use Internet Explorer on a PC, and the fact that they had to install a DLL fix on computers that didn’t have developer programs with the necessary DLL files.

I was going to argue with him that, first of all, if the system is web based, you shouldn’t need to install extra DLL files just to be able to run it, because then the principle of being accessible from any computer falls apart.

Second of all, you don’t want to (even though it’s an admin tool where you can set requirements for the user/administrator) demand that they use Internet Explorer on a PC. The least you can ask for, in my eyes, is that it is available in at least one web browser per platform among the three major platforms: Windows, Mac and Linux. The only thing needed to achieve this is that you, besides Internet Explorer on PC, make sure you support Firefox (which also contains support for WYSIWYG editing through Midas).

But before I got the chance to confront him about this, he presented their new .NET based version of the admin tool that, lo and behold, wasn’t even a web browser interface any more. They had come to the conclusion that they weren’t satisfied with the functionality and stability offered in web browsers and had decided to build a Windows application in .NET (a so-called Windows Form).

Since the administrators in their respective implementations were so few, and didn’t have any real interest in working in the system from other computers or from, for example, home, they thought that a “real” application suited them better.

I haven’t really decided what I think about this yet. Part of me is of the principle that as many things as possible should be web based and not require any installation, so that the only thing needed is a capable web browser. On the other hand, I’m aware of the fact that, among other things, the functionality a Windows application offers can’t really be matched by a web browser.

So, the question is: have they chosen the correct path or not? Are we trying to create too advanced solutions that web browsers aren’t suited/ready for, or is it lack of knowledge and competence that results in companies avoiding web based interfaces?

Excuse me for generalizing now, but basically every girl/woman I’ve worked with has either been a developer with the ambition of becoming a project manager, or a project manager who previously was a developer. Why do all women want to become project managers?

We’re not talking about some woman who has gone through an intensive course, started to work and then realized that it didn’t suit her. No, no, we’re talking about women who have studied different technical educations at universities for many years. Did they get tired of it already while studying but went through with it and completed their education just to have a degree to lean back on? Or did they always nurture the plan of moving another step up into a leading role?

What I find a little strange about this is that the women (the ones I’ve met, at least) usually have a talent for logical thinking and are perfect in a developer role. But very rarely do they take the step and become system architects; instead they turn to becoming project managers.

Recently, I’ve been moving towards an attitude that I want to satisfy as many users as possible, which means that everyone should be able to see and use the web sites I build. To me, it feels kind of like a Google philosophy, to reach as many users as possible with really easy to use interfaces.

It has gone so far that I even avoid relying on JavaScript being enabled in the user’s web browser (people who have worked with me previously probably won’t believe this; I love JavaScript!). But it’s more about what it’s worth: that one doesn’t use functions, scripts, plug-ins etc. just for the sake of it, but actually uses them when it is motivated and gives a necessary enhancement to the web site/page.

I mean, how many times has one done very advanced things on a web page, that one has been particularly pleased about, only to find it looks different on another computer and doesn’t work on a third one just because, for instance, scripting is disabled, and so on.

No, I have moved more in a direction where, instead of using advanced functions in the client’s web browser, I try to create web user interfaces that are managed through CSS and where content and its looks are totally separated, as in the brilliant example CSS Zen Garden, where every page has the same HTML and the CSS takes care of everything that has to do with how it looks and its layout.

Also, I really like when web sites give the user an option to change the font size for the current web site/page without the need of going into the web browser settings just to achieve that.
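A minimal sketch of how the logic behind such a font size switcher could work (the preset sizes and the function name are my own assumptions, not taken from any particular site); in a browser you would apply the returned value to document.body.style.fontSize and perhaps remember the choice in a cookie:

```javascript
// Preset sizes to cycle through; purely illustrative values
var SIZES = ["0.9em", "1em", "1.2em", "1.4em"];

// Given the current size, return the next preset.
// An unknown size, or the largest one, wraps around to the first preset.
function nextFontSize(current) {
	var i = SIZES.indexOf(current);
	return SIZES[(i + 1) % SIZES.length];
}
```

Keeping it as a simple cycle means a single “text size” link suffices, instead of separate increase/decrease controls.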

I’m also of the firm belief that to reach the major audience (i.e. “all” users), it’s vital to make things as easy to use as possible for them. I believe most inexperienced users are bothered/discouraged by texts like:

“Optimized resolution for this site is 800*600”

“You need to have JavaScript enabled in your web browser to be able to use this web site”

“You have to install Flash to hear our epileptic music and to see our bouncing circles”

I think the future is to follow the W3C recommendations, which most web browsers have pretty good support for today (except for, mainly, the PC version of Internet Explorer), to reach as many users as possible. To start thinking about the end user and showing them respect, instead of just complaining about their lack of knowledge and thinking that they’re ignorant.

Google Desktop Search: a local search engine to index and search through your own computer in a fast and efficient way.

Another very interesting thing Google has done is hiring the Lead Engineer for Firefox, Ben Goodger, although he continues to work on Firefox while being paid by Google. Read more about this at Spread Firefox, c|net, Ben’s blog and Kottke.

My personal hope is that Google will develop their own web browser, based on the Gecko rendering engine and Firefox. I think it would be good for the web browser market, and I’m convinced that it would be noticed by a lot of people and have enormous marketing potential. Google’s name is more recognized by the general public (those who aren’t internet nerds) than Mozilla (or Mozzarella, as my girlfriend calls it).

This is just a small selection of the things Google are up to, and it seems that they have an ambition similar to Microsoft’s: to become as vital a part as possible of the computer user’s everyday life.

I suffer from lack of motivation. I mean, it doesn’t just bother me, I suffer from it. It isn’t really related to my tasks here at work, it’s just that web browsers really make me depressed.
Everything I code is tested in seven different web browsers, and, sure as hell, there’s ALWAYS something that differs between them. It’s always some pixel, always in the last web browser you look in, that ruins the day.

I’m thinking about changing my ambitions with what I want to do…
Program some more advanced things that demand a lot of logic and are tough to program; at least then the day wouldn’t consist of: “Oh no, it pushes to the right. [really dirty word] Where the HELL did that space come from?” and so on.

I’m convinced that I would prefer working with something that makes me evolve as a person with a more logical thinking, than just having experience of what’s wrong in every web browser. All the knowledge I built up about all the bugs in Netscape 4 is really useful now…

The problem I ran into the other day is that .NET doesn’t generate valid XHTML. There exist a number of hacks to get around this, and then one can set what HTML is delivered to different web browsers (basically, high-level HTML to Internet Explorer and old HTML to all other web browsers). The version of HTML/XHTML being rendered is, however, customizable in Visual Studio 2005.

Naturally, one wants to write valid code that validates against its DOCTYPE. However, a colleague of mine recently asked a very good question: is there any relation between whether the page validates and whether it’s being rendered correctly? And basically, the answer is no. Of course, if you have some major errors in your code it won’t look OK, and in those cases the validation can be a hint for how to solve the problem.

But in the big picture it made me wonder if we’re just chasing valid pages for the sake of it, not for the ultimate purpose: that a page looks and works the same in all web browsers and behaves consistently. In the case of major errors, they should be removed, but whether the FORM element has an extra, invalid, NAME attribute or not doesn’t really matter.

I’m starting to wonder if the W3C have gone overboard with the recommendations, that they’re not always justified. I mean, they’ve given up on the fabulous IFRAME element to make room for using the OBJECT tag and setting parameters for it. This is, of course, not properly supported by Internet Explorer, so I guess we can estimate about five years (at least) before we can use that.

In principle, it’s then impossible to have valid XHTML Strict with IFRAME functionality and target all major web browsers, even though the IFRAME itself works in all of them. What’s the point of being valid then?
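For illustration, the W3C-sanctioned replacement looks roughly like this (the URL and dimensions are placeholders of my own); it’s this kind of markup that Internet Explorer handles poorly:

```html
<!-- Valid in XHTML 1.0 Strict: no iframe, an object embedding another page -->
<object data="http://www.example.com/page.html" type="text/html"
        width="400" height="300">
  Fallback text for browsers that cannot display the embedded page.
</object>
```

Same embedding idea as an iframe, just expressed through the generic OBJECT element instead.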

And so Microsoft have changed their mind…
They weren’t going to release a new version of Internet Explorer until
Longhorn, but now they’ve turned around and the first beta is expected
this summer.

So what will it contain? My hope, as a developer, is of course that they implement better support for the W3C recommendations, especially when it comes to CSS. Internet Explorer has unfortunately held us back quite a lot, especially lately when basically “every” other web browser on the market has got better support for it.

Unfortunately I don’t think quite a lot will change in the upcoming version. Microsoft themselves have said
that they probably won’t do much about the CSS support: “We could
change the CSS support and many other standards elements within the
browser rendering platform. But in doing so, we would also potentially
break a lot of things.” Personally, I don’t find this to be a sound
attitude, kind of like “Everyone has been allowed to code the wrong way
so far, so let them keep on doing it forever.” Eric Meyer, the king of
CSS, has written an interesting post about this.

Then,
of course, I hope that they prove me wrong, but it seems that they will
“just” improve security and maybe add tabbed browsing.

I would also like to point out that my opinions aren’t about acting rebellious against the giant Microsoft; to me it doesn’t matter if the name of the best web browser is Internet Explorer or Firefox (or, by all means, Safari). I personally changed to Internet Explorer when the lousy Netscape 4 hurt the market SO much, and then to Firefox when it turned out that Internet Explorer was just stagnating.

The ultimate situation would be if all web browser vendors made sure they followed the existing recommendations; then it would be up to the user to choose the web browser that offers the interface and extra features they prefer. Basically, it’s like that right now, except for Internet Explorer, which is lagging 3-4 years behind the others…

Of course Microsoft’s decision is understandable from a business perspective. Security flaws can make users stop using Internet Explorer out of fear, but better support for, for instance, CSS is scarcely a motivator for the home user.