Can someone explain to me why the UK/US custom is to write numbers as .5 instead of 0.5?
I would think that the decimal point in such notation is quite easy to miss, so at a quick glance it could be mistaken for a 5.
So is it because people were just too lazy to add a redundant zero in front of the point, or is there something else that I am not aware of?

I personally teach, and use in my own writing, the leading 0 where appropriate.

However, I don't think its lack of use is caused by laziness. It is more related to the fact that a leading zero is not used in other situations. Would you write 050 for fifty? Would you do so if you were adding five hundred to fifty (is it (500 + 50) = ?, or (500 + 050) = ?)? In fact, from this point a case could be made that 0.5 is as confusing to the eye as .5.

Children in the earlier grades especially get confused enough by decimal points without adding another 'special' case. Having said that, sometimes teaching a particular student to use or 'imagine' the leading zero aids in their understanding.

Disclaimer: I am not a certified teacher, but most of my family are. I have been in the classroom as a sub.

I am aware of how it works nowadays, but I still can't get to the bottom of how these two different notations emerged.
My personal guess would be that it started as a difference between British and continental notation, but I don't even know how I would google that to find something more. Any clues?

Children in the earlier grades especially get confused enough by decimal points without adding another 'special' case. Having said that, sometimes teaching a particular student to use or 'imagine' the leading zero aids in their understanding.


That is the point I saw somewhere when googling, and I don't get it either. Why would you need to imagine a leading zero? It is just there and always has been.

To me it is natural that 0.5 is somewhere between 0 and 1, and 1.3 is somewhere between 1 and 2. Now .5 seems like a special case where the 0 is omitted just "because", while all other numbers keep their leading digit.
Edit: could that be the reason why kids get confused with decimal points, as you say?
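As a side note for anyone reading this thread with a programmer's hat on: the two spellings denote the same value, and programming languages generally accept both on input while writing the leading zero back out. A minimal Python sketch:

```python
# Both spellings parse to the same number...
assert float(".5") == float("0.5") == 0.5

# ...but Python always emits the leading zero on output.
print(str(0.5))     # -> 0.5
print(repr(.5))     # -> 0.5
print(f"{.5:.2f}")  # -> 0.50
```

So at least in code, the "always show the zero" convention won out for output, even where the bare .5 form remains legal input.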

{omitted}
To me it is natural that 0.5 is somewhere between 0 and 1, and 1.2 is somewhere between 1 and 2. Now .5 seems like a special case where the 0 is omitted just "because", while all other numbers keep their leading digit.


That is my point. What is natural to you may be strange to others. I have seen frustrated people trying to understand why a number should EVER start with a zero. That numbers between 0 and 1 should be treated differently (from THEIR perspective) is confusing. Their perspective is that there are 'gazillions of numbers' and they just don't start with a zero. That is why I asked why someone couldn't write 050.

To me 050 can only be the result of subtracting 75 from 125 with the numbers written in rows; otherwise it is missing a decimal point. And here you can see the advantage: it is much harder to lose a zero to bad handwriting or poor eyesight than to lose a decimal point and turn .50 into 50.

I of course understand there is another perspective, and that neither can be called more correct than the other. My original point was more about trying to find out how there came to be two standards, and what the reasons for the difference were.

I can testify that, clear back to the 1950s, the leading zero was not taught in American schools. That doesn't say anything about "why", only that it has been accepted practice for nearly 60 years.