pixels

Did I say, in my previous post, that I wanted maximum sharpness? Well, NASA has the ultimate: a one-billion-pixel composition of hundreds of photos taken by Mars rover Curiosity. You can find it here. It may take some time to load… 😉

I try to limit the hardware craze: a good camera does not have to be traded in for a new one just because the Joneses have one, and neither do old lenses. Still, I have upgraded from 6- to 12- to 24-megapixel sensor cameras, and my lenses were upgraded as well to match the increased quality. Similarly, my laptop does not have to be traded in for a new one just because there are new models available. Even with 24-megapixel pictures, my laptop still works well. A bit slow, but hey, I’m not a pro who has to process hundreds of pictures a day, so let’s be a little environment-conscious. My laptop is four years old now and could do well for at least another year.

And yet…

An article on PCWorld, ‘Laptops with super high-def displays worth the price’, convinced me that the ‘retina screen’ is not hype but a true innovation, because you no longer see individual pixels: ‘Suddenly, there’s harmony. Your eyes actually see what your brain wants to see.’
‘Suddenly, it’s possible to look at a screen and for the first time ever, see no pixels, no “jaggies” (the jagged edges of pixelated raster images) and no gray boundaries around letters. We’ve crossed some kind of line.’
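As a rough sanity check on that claim: a common rule of thumb is that the eye resolves detail of about one arcminute, so you can estimate the pixel density at which individual pixels disappear at a given viewing distance. (The one-arcminute figure and the 18-inch distance are assumptions for illustration, not numbers from the article.)

```python
import math

def retina_ppi(viewing_distance_in, arcmin=1.0):
    """Pixel density (PPI) at which a pixel subtends `arcmin`
    arcminutes at the given viewing distance (in inches)."""
    pixel_size = viewing_distance_in * math.tan(math.radians(arcmin / 60.0))
    return 1.0 / pixel_size

# A laptop viewed from about 18 inches:
print(round(retina_ppi(18)))  # roughly 191 PPI
```

Anything around 200 PPI or more at laptop viewing distance clears that bar, which would explain why these panels look pixel-free.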

Now I want a new Apple MacBook!

Let’s wait a month and see if the urge is still as strong. That’s a well-known trick to avoid impulse buying… but this seems a serious candidate!


A conference room in an Arabian country. The contrast with the pictures shown here in the last couple of months could hardly be bigger. Monty Python’s famous phrase “And now for something completely different!” certainly applies!

Plenary session at ANQAHE's first conference

For the photo camera fans: this is the first picture I post here taken with the new Sony A77 camera and the equally new 16-50 f/2.8 lens, and that is something completely different, too! The electronic viewfinder takes some getting used to, but it has some great pluses: it shows the actual brightness of the picture you are about to take, so you can see right away whether the conference room calls for some exposure compensation, and it shows 100% of the frame, so there are no surprising heads entering the edge of the picture. Besides, the camera is incredibly silent and fast, since there is no flapping mirror. The f/2.8 lens is a joy to work with as well: with the large amount of light it gathers you need the flash less often (in combination with the camera’s 24-megapixel sensor, this makes ISO 1600 a very acceptable high speed to work with), and the lens also seems to have very little distortion compared with my previous Sony 16-105.
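To put the f/2.8 advantage in numbers: the difference in stops between two apertures follows from the ratio of the f-numbers, each stop doubling the light that reaches the sensor. A small sketch (the f/5.6 comparison value is my own assumption for a variable-aperture zoom at the long end, not a quoted spec):

```python
import math

def stops_between(f_slow, f_fast):
    """Exposure difference in stops between two apertures;
    light gathered scales with the square of the f-number ratio."""
    return 2 * math.log2(f_slow / f_fast)

# A constant f/2.8 zoom vs. a slower zoom wide open at f/5.6:
print(round(stops_between(5.6, 2.8), 1))  # 2.0 stops: four times the light
```

Two stops means you can shoot at ISO 1600 where the slower lens would need ISO 6400, or keep the flash in the bag.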

Another technical advance of the A77 over the Sony A700 is that the RAW mode seems to interpret colour temperature much better in the automatic white balance mode, making post-processing a much lighter and faster exercise too.

Did I say somewhere that there is no magic in ever larger numbers of pixels? I just bought myself a new camera, a Sony A700, with 12 megapixels, double the number of my old and trusty KonicaMinolta 7D. And the first picture that I printed is amazingly sharp, even at A3 size! (A little example will be added in due course; it’s in the next entry.)

But the real puzzle with this new camera is the DRO, the ‘dynamic range optimizer’. It is supposed to bring out more detail in the dark areas of pictures before they are compressed in-camera to JPEGs. A nice test is given by one Gamin, with a range of pictures using different settings. If I understand things well, what it does is change the tone curve that you encounter in Gimp, Photoshop, Lightroom, Aperture or what have you: it makes the shadows and dark midtones lighter. It can’t add to the total range of tones, can it? If that is true, it can’t be of any importance to the RAW photographer, right? Then why does Sony add a dynamic range thingy to its RAW conversion software for the PC? If there is anyone out there who can explain the logic to me, please do!
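For what it’s worth, here is my reading of DRO sketched as code: a simple shadow-lifting gamma curve (purely an illustration of a tone-curve change, certainly not Sony’s actual algorithm). Note that the endpoints stay put, which is exactly why such a curve cannot extend the total tonal range:

```python
def lift_shadows(value, gamma=0.6):
    """Apply a gamma curve to an 8-bit value: gamma < 1 lifts
    shadows and midtones while black (0) and white (255) stay fixed."""
    return round(255 * (value / 255) ** gamma)

# Dark tones gain the most; the extremes do not move at all:
for v in (0, 32, 128, 255):
    print(v, "->", lift_shadows(v))  # 0->0, 32->73, 128->169, 255->255
```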

Until then, I have uninstalled the Sony converter, because it has a hopelessly cluttered set of panels for making adjustments to the RAW picture, which is then saved as a TIFF file. I like Lightroom’s ‘virtual’ changes much better (as well as its clean screen layout, once you’ve changed the funny panel end marks to simple boxes): it stores all my adaptations and post-processing, but always gives access to the original RAW file, until I decide to export the file in the format I wish.