

Better JPEG standard due in 2009

JPEG XR is a new image format created by Microsoft with many advantages over JPEG. It will probably be released during 2009. Microsoft of course hopes that JPEG XR will become widely used, but it faces the huge challenge of displacing the firmly established JPEG.
What are your thoughts? Will it have any success?

HD Photo is not all that new, though it is newer than both JPEG2000 and JPEG. A compressor/decompressor has been included in the .NET runtime for a couple of years already.

What is new here is that:

* The Joint Photographic Experts Group has been considering HD Photo, under the name "JPEG XR", as one of its new standards for two years, but it has recently entered the "final phases" of standardisation.

The linked article mentions that HD Photo has "a richer colour palette". This is untrue: with flexible support for virtually any colour space, JPEG can already represent any colour. The author has confused a greater bit depth with the ability to reproduce richer colours. What a greater bit depth actually means is that more intensity detail can be contained in the image - far more than the human eye can see, but useful when heavily manipulating an image later (such as an intermediate step when producing "HDR" photos).

HD Photo provides more than 8 bits per channel of bit depth before compression and after decompression. In fact, it allows virtually unlimited depth (e.g. 32-bit float), but this is unlikely to be used by many. Its claim to fame is that it supports 16 bits per channel, plus an alpha channel. However, it still uses roughly the same lossy compression algorithm as JPEG, albeit with a slightly more efficient lossless compression stage added afterwards.

The JPEG compression algorithm, being lossy, starts out with a bit depth of 8 bits per channel. This bit depth is then effectively reduced by a process called quantization. Quantization reduces the size of the values by dividing them by a constant and throwing away the remainder. Values that were small before quantization become zero and can be thrown out entirely. The rest are compressed using Huffman coding, which effectively means each value can take up a variable number of bits.
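The divide-and-round step described above can be sketched in a few lines of Python. The divisor 16 is an assumed example value; real JPEG uses a whole table of divisors, one per frequency coefficient.

```python
# A toy sketch of JPEG-style scalar quantization (the divisor 16 is an
# assumed example value, not a real JPEG table entry).
def quantize(coefficients, divisor=16):
    """Divide each value by a constant and discard the remainder."""
    return [round(c / divisor) for c in coefficients]

def dequantize(quantized, divisor=16):
    """Reverse the division; the discarded remainders are gone for good."""
    return [q * divisor for q in quantized]

coeffs = [130, 47, 12, 5, -3, 2, 0, -1]   # made-up DCT-like values
q = quantize(coeffs)                       # -> [8, 3, 1, 0, 0, 0, 0, 0]
restored = dequantize(q)                   # -> [128, 48, 16, 0, 0, 0, 0, 0]
```

Notice how the small values collapse to zero (and compress to almost nothing), while the large values survive with only a small rounding error.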

This quantization is not done on a per-pixel basis. Before quantization, the image is chopped into 8x8-pixel squares and each square is translated into the 2D frequency domain, where each value is no longer one pixel but a coefficient that goes towards recreating an 8x8 intensity map. The quantization process therefore does not translate exactly into a certain reduction in bit depth per pixel; rather, some frequencies (particularly the high frequencies) are reduced more than others. The lowest-frequency coefficient is effectively the average value of an entire 8x8 square, and it undergoes the least quantization, because the eye is most sensitive to subtle changes in intensity in the lowest-frequency parts of an image.
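As a sketch of that transform, here is a naive (unoptimised) orthonormal 2D DCT of an 8x8 block in pure Python. Real encoders also shift sample values by -128 first and use fast DCT algorithms; this toy version skips both, but it shows the key property: the first coefficient is just a scaled block average.

```python
import math

def dct2_8x8(block):
    """Naive orthonormal 2D DCT-II of an 8x8 block (the transform JPEG
    applies to each 8x8 square before quantization)."""
    def a(k):  # orthonormal scale factors
        return math.sqrt(1 / 8) if k == 0 else math.sqrt(2 / 8)
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = a(u) * a(v) * s
    return out

# A flat 8x8 block: every pixel has the value 100.
flat = [[100] * 8 for _ in range(8)]
coeffs = dct2_8x8(flat)
# The DC term coeffs[0][0] is 8x the block average (800 here); every
# other coefficient is ~0, so a flat block needs only one number.
```

This is why smooth image areas compress so well: almost all of their coefficients quantize to zero.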

So it's not really possible to say that a JPEG has a bit depth of 8 bits per channel, because the amount of intensity information is reduced somewhat from this, and it is reduced a lot more in the high-frequency areas of an image than in the low-frequency areas. It is possible to say, however, that before compression and after decompression a JPEG image has a bit depth of 8 bits per channel - but the actual intensity detail in the resulting image is lower than in the source image. As far as the human eye is concerned, it still has the same amount of visible detail as the original, uncompressed 8-bits-per-channel image, because the quantisation removed detail the human eye is unlikely to notice (assuming, of course, a reasonable quantisation level or 'JPEG quality level' was chosen).

Where HD Photo comes in is that a source image may have 16 or more bits per channel, and the decompressed image can retain that bit depth. However, the compression process (if lossless compression is not used) will still compromise some of that intensity information in between, in ways a human shouldn't really notice.

The human eye is surprisingly poor at seeing subtle changes in intensity. Assuming a normal gamma value of 2.2 (used in most colour spaces), 8 bits per channel is more than enough for us in most cases. Even 6 bits per channel (which many LCD monitors use) is enough in some cases. However, this assumes that we're not going to manipulate the image afterwards. Once you start manipulating an 8-bit-per-channel image with curves in order to bring out extra detail and maximise the contrast, you can start seeing the effects of posterisation (or just more noise). This is where a greater bit depth comes in handy.
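The posterisation effect can be demonstrated with a toy contrast stretch. The input range 100..120 is an assumed example; the point is that 8-bit data stretched to full contrast can only produce as many distinct levels as it started with.

```python
# A toy illustration of posterisation: stretch a narrow 8-bit range
# (100..120, an assumed example) to full 0..255 contrast. The 21 input
# levels can only produce 21 distinct outputs, leaving visible gaps
# ("bands") between adjacent levels.
def stretch(v, lo=100, hi=120):
    return round((v - lo) * 255 / (hi - lo))

outputs = sorted({stretch(v) for v in range(100, 121)})
gaps = [b - a for a, b in zip(outputs, outputs[1:])]
# Only 21 distinct output levels, spaced 12-13 apart instead of the
# smooth 1-apart steps a higher-bit-depth source would give.
```

A 16-bit source put through the same curve would fill in those gaps, which is exactly why the extra depth matters for heavy editing.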

The RAW image data from a typical DSLR camera is usually 12-bit, but that is at a gamma of 1. It MUST have gamma correction applied before it can be displayed anywhere or converted to any other format. After gamma correction, the effective bit depth is barely over 8 bits per channel in the darkest regions of the image, which is where the human eye is best at seeing small changes in intensity. You could therefore say that you need 12 bits per channel in the RAW file in order to have 8 bits per channel once gamma correction is applied. So the idea that you get more bit-depth information from a RAW image converted to 16 bits per channel than from an 8-bit-per-channel original is not strictly true. It is true in the brightest regions of an image, though, where we are less likely to notice it.
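The shadows-versus-highlights asymmetry can be checked with a rough calculation. The `encode` helper below is a hypothetical sketch (12-bit linear input, 8-bit gamma-encoded output, gamma 2.2 assumed): near black, one linear step already spans several output codes, while in the highlights dozens of linear steps collapse into a single output code.

```python
# A rough sketch of why ~12 linear bits are needed to feed 8
# gamma-encoded bits (gamma 2.2 assumed, as in most colour spaces).
def encode(linear_code, bits_in=12, bits_out=8, gamma=2.2):
    """Map a linear sensor code to a gamma-encoded output code."""
    x = linear_code / (2 ** bits_in - 1)
    return round((2 ** bits_out - 1) * x ** (1 / gamma))

# In the deepest shadows, one linear step jumps several output codes:
dark_step = encode(1) - encode(0)            # about 6 output codes
# In the highlights, ~35 linear steps share roughly one output code:
bright_step = encode(4095) - encode(4060)    # 0 or 1 output codes
```

So the 12 linear bits are spent keeping up with the gamma curve in the shadows, with precision to spare only in the highlights.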

Originally Posted by felgall

It would have to offer significant benefits over PNG for it to get anywhere.

PNG is different; they aren't competitors. PNG is purely lossless compression: there is no transformation into the frequency domain and no quantization. A 6-megapixel photo will virtually always come out at around 10 to 20 megabytes in PNG, because there is no lossy compression. JPEG and HD Photo can compress it much further by losing detail in the image while still maintaining acceptable quality - well below 10MB, even down to around 1 or 2MB.

Then it deserves to catch on (Microsoft or not) - has anybody had any dealings with it before? (under its old name)

It's available in the .NET runtime so I dare say .NET developers may have used it internally.

I don't think it ever caught on too much with photographers. I am not aware of it being available in-camera for many cameras, and besides, those photographers who are concerned about bit depth will shoot RAW, guaranteeing no detail is lost whatsoever. Then afterwards, they can always convert to TIFF at higher than 8 bits per channel - it takes up more space but once it's on the hard disk that doesn't matter.

The only way I can see the format taking off is if, like JPEG, it becomes well supported in situations where small file size really does matter. Even in-camera this isn't too necessary anymore, because memory cards are huge. The last remaining frontier where image file size still matters is the web. My prediction is that without support on the web, HD Photo / JPEG XR will not be successful. Even then, the usefulness of greater than 8 bits per channel on the web is a long shot. Current browsers don't even do colour management of the existing 8-bit images properly.

There are also plenty of other examples where technically superior technology has bombed in the marketplace simply because it came late, was initially too expensive or too restrictive, or didn't offer enough of an advantage to justify the above. Take SACD, Videodisc and Blu-ray video (which is performing so poorly it will never recover its R&D and marketing costs), for example. It's also worth looking at JPEG2000, which has not been successful either.

I think it would be better if it did not compress each time you save it. For example, if you save a photo as a JPEG in MS Paint and save it several times over, the quality degrades and the image gets more distorted each time, but the file size stays the same... so why not just keep the original compressed data? Why compress (and distort) the image each time you save it when you are not actually compressing it further, since the file size stays the same? Obviously if you want to reduce the file size you will lose quality, but I don't understand why you lose quality simply by saving a JPEG at the same compression rate/standard it already was. I can see how it is possible this happens - I am starting to see how JPEGs are fundamentally different from other formats like bitmaps or PNGs - but I think it is absolutely awful, and I hope this new JPEG has a better compression method/process/algorithm, so you only lose quality when you save at a higher compression rate.
I will continue using PNG for now.

I think it would be better if it did not compress each time you save it. For example, if you save a photo as a JPEG in MS Paint and save it several times over, the quality degrades and the image gets more distorted each time
...
I think it is absolutely awful, and I hope this new JPEG has a better compression method/process/algorithm, so you only lose quality when you save at a higher compression rate.
I will continue using PNG for now.

The effect you are referring to is a fundamental behaviour of lossy compression, and is the case with all image (and audio, and video) formats that are based on lossy compression.

Whenever you save to any format that uses lossy compression, the quality of the image after decompression will be reduced compared to the quality of the image at the time you saved.

If you decompress it and edit it further, and then save it to a lossy compression format again, then the edits you made will cause a further loss of quality. The loss of quality is unavoidable because it is a lossy compression format.
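The edit-then-resave loss can be modelled with the toy quantizer idea from earlier in the thread. In this simplified model (a single `round(v / step) * step` stands in for the whole save; real JPEG also loses a little on every resave from rounding in the DCT and colour conversion), re-saving unchanged data loses nothing further, but an edit moves values off the quantization grid and the next save rounds them again:

```python
# A toy model of re-saving a lossy file: quantization is the lossy step.
def save(values, step=16):
    """Quantize: round every value onto a coarse grid, losing detail."""
    return [round(v / step) * step for v in values]

original = [23, 100, 187, 52]
first = save(original)           # detail is lost on the first save
resaved = save(first)            # no edits: already on the grid, no new loss
edited = [v + 5 for v in first]  # e.g. a small brightness tweak
second = save(edited)            # the save re-quantizes; here the +5 is
                                 # less than half the step, so the edit
                                 # is rounded away entirely
```

Either the edit survives distorted or (as here) it vanishes - in both cases the quality after an edit-save cycle is worse than what you intended.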

This is why you should avoid opening a JPEG image and performing further edits on it whenever possible. If you think you are going to need to edit something in future, do not save it in a lossy compression format; my recommendation would be TIFF or PNG instead. Use JPEG only for final distribution of the image, such as putting it on the web, and make no further edits to the JPEG version.

So why is JPEG a standard format for digital cameras etc.? Why not just use PNG, and then people can convert to JPEG after all the editing, for the final deployment of their images in their project? Surely if any image saved as JPEG is lossy, then even the raw images taken by the digital camera will have lost some quality if the camera saves the first raw file as a JPEG?

When recording audio, you get the raw file in something like .wav (not lossy); you only make a lossy format like MP3 when uploading it or something. So why does the graphics industry choose JPEG (lossy) as a raw standard? Why not use bitmap or PNG? I find PNG quite remarkable in that the quality seems as good as bitmap, and the file size is smaller than JPEG.

I don't know much about image files and compression, but if it has better quality and a lower file size, I'm definitely sold. I run an image-heavy Photoshop website, and often GIFs just don't cut it when you have gradients or lots of contrast, so I'm forced to use JPEGs to preserve the quality.

So why is JPEG a standard format for digital cameras etc.? Why not just use PNG, and then people can convert to JPEG after all the editing, for the final deployment of their images in their project? Surely if any image saved as JPEG is lossy, then even the raw images taken by the digital camera will have lost some quality if the camera saves the first raw file as a JPEG?

When recording audio, you get the raw file in something like .wav (not lossy); you only make a lossy format like MP3 when uploading it or something. So why does the graphics industry choose JPEG (lossy) as a raw standard? Why not use bitmap or PNG? I find PNG quite remarkable in that the quality seems as good as bitmap, and the file size is smaller than JPEG.

The consumer camera market uses JPEG because most consumers don't have the software to view RAW image files. The professional camera market uses RAW and not JPEG.


Thanks, that makes sense. I do not own a professional camera yet, so that is probably why I thought they all used JPEGs. That is probably a good indicator of whether a camera is good or not (whether it uses raw or JPEG).

So why is JPEG a standard format for digital cameras etc.? Why not just use PNG, and then people can convert to JPEG after all the editing, for the final deployment of their images in their project? Surely if any image saved as JPEG is lossy, then even the raw images taken by the digital camera will have lost some quality if the camera saves the first raw file as a JPEG?

When recording audio, you get the raw file in something like .wav (not lossy); you only make a lossy format like MP3 when uploading it or something. So why does the graphics industry choose JPEG (lossy) as a raw standard? Why not use bitmap or PNG? I find PNG quite remarkable in that the quality seems as good as bitmap, and the file size is smaller than JPEG.

Digital cameras have traditionally used JPEG for a couple of reasons.

- JPEGs save a lot of space, and historically memory cards were of very small capacity (anyone remember 256MB cards?). The space-saving property of JPEG was originally seen as worth the quality trade-off because it let you fit more photos on a card.

- JPEG is a standardised, open format and is supported by a lot of software, whereas raw is not a single format but a general term for any one of hundreds of proprietary formats which until recently required specialist software licensed only by the camera manufacturer.

The first reason is no longer very relevant: 16GB memory cards are commonplace now, so there is far less need to compress the images.

The second reason is still relevant, but is becoming less so. Raw images are still largely proprietary formats and incompatible with each other, but there is now widespread free and open software (the dcraw library) that can decode virtually all existing raw formats. Not to mention tools like Photoshop Lightroom which make working with raw files much easier.

However, there is a third reason that is becoming more relevant today:

- Consumers, for the most part, can't tell the difference and don't really care. Fair enough too, since if you don't need to do much further editing then a good JPEG is perfect, even for a professional. There are some professional photographers who swear that JPEG is all they ever need, and that they don't need the ability to do extensive editing afterwards because they get everything right in camera like in the 'good old days'.

The JPEG image that a camera generates does have lossy compression, but the loss is very minimal. It's like putting Photoshop on its very highest quality setting. The quality of a JPEG from a camera is typically much higher than the default quality level when you save a JPEG in a software application, so it is pretty good. Nonetheless, it still misses some detail present in the camera's internal raw format.

To clarify, consumer-level digital cameras such as compact digital cameras usually support JPEG only, whereas pro-level digital cameras such as higher end compact digital cameras or DSLRs give a choice between JPEG and raw (and sometimes both at once).

In the future, if the DNG format (a raw format that is an open standard, not proprietary) gains support from camera manufacturers, that will significantly improve the range of software that can handle raw images, and it could be another reason for camera manufacturers to move away from JPEG.

As I've said, JPEG's main benefit is in situations where the reduction in file size justifies the quality loss - the web is one of the last frontiers where file size is important to that degree.

Those of you who are saying that it will probably succeed, what basis do you have for saying that?

Do you know how many file formats there are that have never made it mainstream? What sort of dominant and leading position does Microsoft hold in the digital photography or graphic design industry? What problems does HD Photo solve that aren't solved better by other established technologies?

I notice that none of the four poll options is for those of us who think it's unlikely to succeed. All four options imply that it is likely to succeed (though option 3 is unsure about the near future). If there were a poll option for "probably won't succeed", I would choose that. It's not because it's Microsoft; it's because there are already a bazillion formats for digital images and the odds are against them, and unlike Adobe, Nikon or Canon, Microsoft is not really in a position to use its market presence to force adoption of something like this. Couple that with the fact that the need for lossy compression is shrinking, and TIFF/JPEG is fine for just about everything else.

Those of you who are saying that it will probably succeed, what basis do you have for saying that?

Do you know how many file formats there are that have never made it mainstream? What sort of dominant and leading position does Microsoft hold in the digital photography or graphic design industry? What problems does HD Photo solve that aren't solved better by other established technologies?

They could feature it in the newest version of Windows. That would boost the use of these images massively if, for example, the new version of Paint had its default save format set to the new JPEG format.

I don't think it's even about the software. Raw images can be upwards of 40MB, which can make some computers struggle - even modern ones! However, you don't normally get the option to shoot raw on your everyday "point & shoot"; this feature is normally reserved for the more professional cameras.

Thanks for clearing this up for me - so much for my theory. Paint was just an example, however, and who knows, they might still change this; they could use the format in Windows 7 nonetheless.

A lossy compression format would not be a good idea as the default output format for Paint - it would need to be a lossless format. Even if it were desirable, Paint does not (to my knowledge) support alpha channels or more than 8 bits per channel, so HD Photo/JPEG XR would not provide any significant benefit. These types of features are the domain of professional photo-editing software, and Microsoft doesn't really enter into that market. Professionals already use Adobe software and their Nikon/Canon/etc. cameras and scanners, and already use formats like RAW, TIFF and PSD, which are lossless yet have all the bit depth of HD Photo/JPEG XR - plus occasionally JPEG, because despite the lower quality it's supported everywhere.