New to Panasonic's flagship G9 is a high-resolution mode, which shifts the sensor in half-pixel increments across eight exposures and generates an 80MP final image. As with similar technologies from Ricoh and Olympus, it's not generally recommended for scenes with moving subjects. But we wanted to see if we could make it work.

You'll notice that in the above image, the pedestrians in the foreground are sharply 'ghosted'; this is due (obviously) to the eight exposures being taken, but also partly to the 1/500 sec shutter speed. What if we purposely chose a slower speed, so that they would blur more naturally into each other?

These are only initial findings on a gray Seattle day, but we've got some interesting results.

Panasonic Leica DG 8-18mm F2.8-4 | ISO 200 | 1/30 sec | F8

For this situation, in order to get a proper exposure without either an ND filter or stopping down to diffraction-inducing levels, I figured I'd give 1/30 of a second a try. As you can see, there's a little 'repetition' around portions of the pedestrians in the foreground and across the street, and while there's lots of detail in the scene, you may want to just use the normal 20MP file for this one.
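
As a side note, the trade-off being described here is plain exposure arithmetic. Here's a minimal sketch of it (our own illustration with assumed numbers, not from the shoot itself):

```python
import math

def stops_between(t_fast: float, t_slow: float) -> float:
    """Exposure difference, in stops, between two shutter speeds (seconds)."""
    return math.log2(t_slow / t_fast)

def compensating_aperture(f_number: float, stops: float) -> float:
    """F-number needed to offset `stops` of extra light
    (each stop multiplies the f-number by sqrt(2))."""
    return f_number * (2 ** (stops / 2))

extra = stops_between(1/500, 1/30)   # slowing the shutter adds ~4.1 stops
print(f"{extra:.1f} stops more light at 1/30 sec vs 1/500 sec")
print(f"from f/2.8, ~f/{compensating_aperture(2.8, extra):.0f} would compensate")
```

Every halving of shutter speed has to be bought back with aperture, ISO or an ND filter; from F2.8 you'd land around F11, which is exactly the diffraction-inducing territory described above.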

What if we go with a little longer of a shutter speed, though?

Panasonic Leica DG 8-18mm F2.8-4 | ISO 200 | 1/8 sec | F8

To our eyes, this exhibits some improvement. Overall, we found that a shutter speed between 1/4 sec and 1/8 sec gave a reasonably natural look to the average pedestrian in motion - of course, for faster- and slower-moving subjects, you'll have to adjust accordingly. Do take note, though, of some interesting colorful streaks in our moving subjects, and of a reduction of resolution in the static objects visible behind them.

If you're thinking about an even slower shutter speed: once you get down to 1/2 sec or so, pedestrians largely just disappear from your frame, leaving barely a shadow for you to notice. Of course, this could be an advantage if you want to eliminate people from your photos without needing an ND filter and a 30-second exposure.

There were some people on these stairs, I promise.
Panasonic Leica DG 8-18mm F2.8-4 | ISO 200 | 1/2 sec | F8

We tried an even longer exposure to see if we could get the motion artifacts to 'disappear' with subjects moving fast enough across the scene, but we could still see some - check out the car taillights and the ground surrounding them in the image below. The rest of the image, predictably, shows good detail, but once you start inspecting the areas of motion too closely, the image starts to look a little strange. That said, you'd probably need someone to point it out for you to really notice it in real life.

Panasonic Leica DG 12-60mm F2.8-4 | ISO 200 | 1/1.3 sec | F4

In any case, the high res mode on the G9 is something we want to continue to look into as we progress with our review. Raw support is coming shortly, and we're looking forward to examining the Raw files from both real-world shooting as well as our test scene.

For now, we've added these images and their corresponding 'normal' 20MP equivalents onto the end of our existing image gallery for you to inspect.

Scroll to the end of our sample gallery to see our updated high res images

It amazes me how everyone gets worked up against the pixel shift feature, mostly complaining that it has limited use. But what camera feature does not have limited use?

Something as simple as a 1/8000 second shutter speed exists on many cameras but is useless in most circumstances. But you don't see someone testing it in a dark room and then complaining that the image is black. Testing pixel shift with moving objects is like this. "Oh, look: artifacts." Of course there are. You used it for the wrong thing.

this is what Hasselblad started, if I'm not mistaken ... then Olympus and Pentax followed ... and it's now becoming one of the standard options in most cameras ...

still however, I'd rather use a large sensor with a true high resolution of 80MP to deliver better results in one single click of the shutter button ... not even PhaseOne offers that though, unless via similar interpolating tricks, which have been available in software since the 1980s!

The Pentax K-3 II came in April 2015, a few months after the Olympus E-M5 II. But Hasselblad had the H4D-40, which shot 200MP 4-frame (true color), in 2011.

But then again, digital camera development is slow. Olympus started the E-M5 (the original "OM-D") who knows when, maybe around 2007? Back when Olympus was still on Four Thirds and the E-P1 (its first Micro Four Thirds body) wasn't out yet, it had an E-M1 and an E-7 (a Four Thirds DSLR) in development, and it was internally debating whether to release a new E-7 model for the Four Thirds system or move to the new Micro Four Thirds system with the E-M1. And this was well before the "OM-D" came out in 2012.

Then at the end of 2011, Olympus started to develop the E-M1 II, which was released in 2016. So, five years of development.

The E-M5 II's development started around 2010-2011, so it too had a few years of development time.

The PEN-F might very well have been in development since 2013, as the E-P5 was the last top PEN model and they never released a new E-P model to compete with the PEN-F (the PEN-F is a separate model from the E-P line).

Pixel shift, as an idea to mitigate Bayer problems, is not really an invention. It is obvious. The invention is how to implement it. And Hasselblad was first.

BTW - the Hubble pixel shift is a red herring. Hubble does not have any Bayer detectors; it does pixel shift for entirely different reasons. If I am not misremembering, it was to fix dead pixels, or maybe to increase resolution as well. Anyhow - it was not to fix Bayer problems.

So @Androle, it is a pity you have got 6 likes, when you are wrong. Adding to Internet misinformation :)

yes, it's new, but I still stand by it: this is the hardware version of the same old software trick ... it probably works slightly better in its results too, as can be seen via practical tests in both fields ... (I have tried it in software myself occasionally but am not much of a fan really ... I don't quite like the hardware version's results either ...)

eventually though, a true 80MP sensor with the right number of pixels (as well as the correct pitch etc.) is surely better than emulating the same number of pixels via software or hardware tricks such as this ... nothing wrong with having the option on these cameras though ... yes, why not ... it may surely come in handy for certain types of jobs and experiments ... :-)

No @Roland Karlsson - this IS the hardware version of the same old software trick. Agreed though, it's needed, but not 100% really! :-)

theoretically, AND practically, something similar is possible with film / print material as well, and it was done many times in the past on a regular basis: a photo shot on 135 format film (or even smaller) can be printed at its optimum best-results size (say 8"x10", or a little bigger or smaller), then re-copied onto large format film (using special copy film materials, filters, and processes), and then enlarged to billboard size with amazingly high quality! (more on this in the comment below, under the star *)

still however, even with film, this trick was done mostly / only when no large-size film shot of the subject was available ... (as in sports photography for example, where 135 format was / is mostly used instead of 4x5" and larger, or even the smaller / lighter medium formats such as 120 and so on ...)

* best results for this kind of trick back in the day, as early as the 1960s and 1970s, and possibly well into the 1980s and later just as well, came from scanning a finely exposed and processed transparency, such as Kodachrome, which was available in 135 format only at the time ...

so, still, digital tricks were at work then too: scanners! scanners that were not only hefty in size, they were also very expensive and required just as hefty and expensive computers to work with ... tens of thousands of dollars, if not more! only large businesses and news and publishing corporations could afford toys that big ...

even today, the same trick can be done nicely and much more cheaply and some firms still do it btw ...

...

my last comment about this btw, thanks for following up and good luck! :-)

Sorry @dprived. You have totally misunderstood. This "trick" has nothing to do with improving the picture with either supersampling or general multi exposure. Nothing at all. All your examples are totally irrelevant.

This "trick" has only to do with nullifying the mosaic pattern of the Bayer CFA filter. Nothing else. Only some very few cameras do that. Hasselblad, Olympus, Pentax and now Panasonic. Only high(er) end cameras. All of them by mainly moving the sensor one pixel.

Maybe there are some technical cameras that do it. Would be surprised.

Some of the cameras also move half a pixel, and that is supersampling. But the main advantage is to nullify the Bayer filter. You can think of this half-pixel movement as first moving one pixel, to nullify the Bayer filter, and then doing supersampling.

What you are talking about can be made with a program called PhotoAcute. There you take a number of images, moving the camera slightly between photos. Then you combine the photos, increasing resolution and removing noise.

The pixel shift technology comes from the printing-press industry, where a color image is created by pressing each color separately and slightly shifting the layout for each color.

The technique and idea are centuries old, dating to when book printing was invented and special, unique color printing was required; before that it was all manual labor, with screen printing based on a similar idea.

What Hasselblad, Olympus etc. did was just reverse the whole idea, since capturing light is the same as creating the image by pressing color onto paper, only in reverse.

The key innovation here is the Bayer pattern, which Bryce Bayer created while working for Kodak.

And if some remember, Olympus and Kodak together created the 4/3" format, its sensor technologies, and future technologies to overcome the resolution and color limitations of current Bayer-filtered sensors. So the HR mode has been held back since that time, waiting for IBIS technology to improve enough.

@Tommy. Why, oh why? We all know what a Bayer pixel shift technology is. Why introduce yet another bogus explanation? The Bayer pattern consists of Red, Green and Blue detectors. If you take four images you can get all three colors in every pixel. That is it.

It has nothing to do with super resolution by taking a series of randomly displaced images.

It has nothing to do with the raster technology in process industry.

It has nothing to do with increasing resolution when scanning.

It has nothing to do with multiple images in astronomy.

You just take one image and then three more displaced one pixel left, down, right. That is all folks!
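
Roland's recipe is concrete enough to simulate. Here's a toy numpy sketch (our own illustration, not any manufacturer's actual pipeline) of a four-shot, one-pixel-shift cycle over an RGGB mosaic, showing that every scene pixel ends up with direct R, G and B samples:

```python
import numpy as np

def bayer_channel(y, x):
    """Channel index at sensor pixel (y, x) for an RGGB mosaic: 0=R, 1=G, 2=B."""
    return [[0, 1], [1, 2]][y % 2][x % 2]

def shoot(scene, dy, dx):
    """One mosaiced exposure with the sensor shifted by (dy, dx) whole pixels."""
    h, w, _ = scene.shape
    raw = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            raw[y, x] = scene[(y + dy) % h, (x + dx) % w, bayer_channel(y, x)]
    return raw

def combine(shots):
    """Merge the four shifted exposures into full RGB at every pixel."""
    h, w = next(iter(shots.values())).shape
    total = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    for (dy, dx), raw in shots.items():
        for y in range(h):
            for x in range(w):
                c = bayer_channel(y, x)
                total[(y + dy) % h, (x + dx) % w, c] += raw[y, x]
                count[(y + dy) % h, (x + dx) % w, c] += 1
    return total / count   # R and B sampled once, G twice, at every pixel

scene = np.random.rand(8, 8, 3)   # stand-in for a perfectly static scene
shots = {d: shoot(scene, *d) for d in [(0, 0), (0, 1), (1, 0), (1, 1)]}
print(np.allclose(combine(shots), scene))   # True: no demosaicing needed
```

The recovery is exact only because the scene holds still for all four exposures, which is precisely why moving subjects produce the artifacts shown in the article.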

hi everybody ... interpolation of a single (or multiple) shot image, or "press (offset) printing" rotation of the CMYK layers (so the 'pixels' don't overlap), etc., are all related and the very basis of what is happening in this 'new' sensor-shift technology, or trick, or whatever we might call it ... but they're not the same!

the fact of the matter is, with interpolation or layer rotation, the resolution doesn't really change: it remains the same number of pixels and only more color hues are gained. with sensor shift, there are also some software processing tricks applied, which together do increase the resolution (and thus file size) of the final image as well ... hence a 30MP sensor yields a 100MP file and a 100MP sensor a 400MP one in the end ... (such as in the latest Hasselblad model ...)

the results however, although quite good, are not as fine as expected really! check out the test images from all of these cameras and you'll see ...

Yes, it is a mess. Looks like they gave a camera to a kid to try out. It's embarrassing that a camera-oriented website would test this feature with moving subjects. It distracts from the real purpose of the feature.

The Sigma DP3 Merrill has the Foveon sensor, which produces all the resolution required in a single exposure without having to "shift" any pixels. The 16-bit file is 88.6MB after processing and converting to TIFF.

Much lower noise, much better colours, somewhat better DR. If it's not interesting, why would anyone ever use a real 45-100MP camera? It's not for everything, but on my E-M1 II (which does not introduce such artifacts, btw) it delivers roughly FF-to-MF IQ for landscapes in an MFT package. I like it.

It's not who brought it first, it's who makes it better and establishes it as a standard, usable feature. Just like live view, which according to Phil Askey was "a solution looking for a problem"; after that, it was history.

@MrALLCAPS: Panasonic's focus so far was video, and the GH series was its cash cow. Olympus's focus has always been stills shooters, which is reflected in its lens line, IBIS and pixel shift. Now Panasonic is taking on Olympus with IBIS, pixel shift and the G9.

@peppermonkey I don't use zooms at all. And for what it is, Leica badge and all, that lens is TOO expensive. I can buy, well, am going to buy, the Fujinon 16mm F1.4. Right now it's $400 cheaper, well built, weather sealed, and has an aperture ring.

In my opinion, if M43 wants to survive, they need to be cheaper and smaller. I was looking into getting another Lumix as I still have my favorite primes for it, but price is stopping me.

So a quick calculation indicates that at 80MP, diffraction limiting is f2.8. So realistically, best suited to lenses like the longer Voigtlander f0.95 trio (not the 10.5mm), the Nocticron, the O75mm/1.8, and perhaps the Sigma 30mm/1.4 stopped down to f2.8 where they achieve extremely high MTF values.

However, since the high-res mode is most appropriately used at 40MP, not 80MP, that brings your diffraction limit up to f4. Which opens up a lot more options, and means that something like the O12-100/f4 at the wide end might be good enough, as well.

Yes, the diffraction is in the image projected by the lens. The lens' projected image is stationary, with the sensor moving to sample more of that image. But since the projection itself is diffraction limited above a certain level of detail, there is no benefit that comes from increasing the amount of sampling you do with the sensor, no matter how many pixels each individual image happens to resolve. You could take 100x samples with a 2MP sensor and the result would be the same.

In diffraction limited cases, the main benefit to the M4/3 multi-shot mode is not from the 4 shots that do the 1/2 pixel offset sampling, but rather from the 4 shots that provide full R, G, B information to the basic 20MP, avoiding Bayer demosaicing artifacts, interpolation errors, and colour information loss.

Diffraction is a limit imposed by the aperture and projection size only. Multiple exposures cannot get you around diffraction unless you're substantially moving the lens to create a pseudo-aperture that is larger (for example, an array of small lenses behaves kinda like a big one).

Irrelevant. Diffraction limiting means just that: it becomes a factor on a sliding scale. A 'diffraction-limited' f5.6 image will still have better resolution than a non-diffraction-limited f5.6 20MP image.

All this blather about 'ooh, it is useless because of diffraction limiting' is spouted by people who don't really understand what they are saying.

Seems your calculation was a bit too quick. At f4 the diffraction limit is just under 500 line pairs per mm, or 1000 pixels per mm. For a 17x13mm m43 sensor that would correspond to just over 220MP. Of course, even at 80MP, diffraction will affect MTF appreciably at f4. Good thing many good m43 lenses perform as well at f2.8 as at f4, or better. Turning it around, these images are proof that the 80MP mode, regardless of tolerances in sensor repositioning and vibration, produces much better results than the 20MP mode, even on a normal zoom.
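
For reference, here is the arithmetic apparently behind these figures: a sketch using the idealized diffraction cutoff 1/(λN), where λ = 500nm reproduces the commenter's numbers. Note this is the extinction point; real-world MTF droops well before it.

```python
WAVELENGTH_MM = 500e-6   # 500 nm in mm (assumed; this matches the figures above)

def cutoff_lp_per_mm(f_number):
    """Extinction frequency of an aberration-free lens, in line pairs per mm."""
    return 1.0 / (WAVELENGTH_MM * f_number)

def limiting_megapixels(f_number, width_mm=17.0, height_mm=13.0):
    """Sensor pixel count needed to sample that cutoff (2 pixels per line pair)."""
    px_per_mm = 2 * cutoff_lp_per_mm(f_number)
    return (width_mm * px_per_mm) * (height_mm * px_per_mm) / 1e6

for n in (2.8, 4, 8):
    print(f"f/{n}: {cutoff_lp_per_mm(n):.0f} lp/mm, ~{limiting_megapixels(n):.0f} MP")
```

This prints roughly 714 lp/mm and ~451MP at f/2.8, 500 lp/mm and ~221MP at f/4, and 250 lp/mm and ~55MP at f/8; so of the apertures used in the article, only f/8 actually falls below the 80MP grid, even on this idealized reading.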

"Turning it around, these images are proof that The 80MP mode, regardless of tolerances in sensor repositioning and vibration, produces much better results than the 20MP, even on a normal zoom."

Most definitely. Was that ever actually in dispute?

At any given aperture, more MP will basically always look better. There are enough variables in this equation that the effects of diffraction are only one piece of the imaging chain, and rarely the most influential one.

Not using a sturdy enough tripod with a rigid connection from the camera to the head, not using it on totally solid ground, and not using a remote shutter release would likely have a much bigger effect in these images than diffraction at any of the apertures (up to f/8) in question.

FWIW, while Panasonic doesn't actually say it, based on the sub-pixel movement, number of photos, and end-result resolution, they are using "super-resolution" techniques. Because of this, diffraction limits as discussed above do not really apply... or, I should say, only apply to the 20MP sensor's resolving ability.

In that evening cityscape, you can actually see the green lettering on the monitor of some guy sitting in his office. It must be quite nice to use two G9s on a rig with iZugar fisheyes in HiRes and then view that in VR, e.g. in the Alps or from a tower...

I don't quite understand why people are pixel-peeping an 80MP JPEG and expecting it to compete at a 100% crop with 36-45MP images from FF cameras processed from RAW.

At the very least, we should wait and see what the RAW results look like. But even more realistically, we should look at a downsampled version of the image. There simply isn't 80MP worth of data with a half-pixel overlap. If I'm looking at this correctly, it should be more like 1.5^2*20MP, so 45MP. This is why Olympus gives you a 50MP JPEG only, and why Panasonic has a 40MP option with the G9.

That is the image that you should be looking at if you want to compare it with a magnifying glass to an A7r III or D850.

Delay is useful, but one cannot know whether movement of the cable release in hand, or the pressing of the shutter button, will excite slight (minute) camera movement that outlasts the delay. It will depend on the focal length, the size, weight and balance of the camera and lens, the sturdiness of the tripod and head, whether a battery grip is used (not recommended), and other external factors such as wind, flexing of the floor/ground, etc. It's important to take extra shots, just in case of unknown movement.

A good test is to go outside, set up the camera on your tripod with a long lens such as 150mm, focus on a distant object with live view 14X magnification, and tap the camera to see how long it takes for it to settle. When I did this using a modern high-end tripod and heavy duty pan-tilt head, I was surprised at how long it took.

This is probably the dumbest test I've seen on DPReview. These guys tried so hard to be "scientifically innocent" by using a mode meant strictly for static subjects to shoot people walking, moving cars and long exposures... and then claim that the result is weird. lol.

I guess the next thing to happen is a meeting with Panasonic PR team and another click bait article to explain the pixel shift use cases. Everybody wins. LOL

The sad thing is not that it cannot do what it should not be able to do. The sad thing is that it cannot do what it should be able to do. It has some faulty algorithm for using subsampling for increasing resolution. It can be seen along all dark thin lines. They have a constant width area that is slightly brighter. Not like usual sharpening halos, but like something that looks extremely artificial.

@CarolHa - yes - it might be a firmware problem. It is a bit strange, though, that they let DPR run this test with this firmware, if the firmware is indeed the problem. It is very common that testers are not allowed to publish pictures when things are not finished.

My experience with the Olympus hi-res files is that you get most of the advantage by processing the RAW file. Those 80mp files can take an extraordinary amount of sharpening and processing. Olympus also provides a single low-res frame with the RAW, which allows you to up-res it, layer it in Photoshop and use it to patch the moving elements. I'd imagine Panasonic's files will be the same.

Huh? This camera gives you a JPEG if you want it, just like Apple and Google smartphones do. But you will still always get a better result if you process it yourself from RAW. This is the case with smartphone images as well. Only in the most recent generation of cameras like the Pixel and iPhone 8 have the auto-HDR algorithms become sophisticated enough to be better than a processed DNG. And that's only because smartphones have so little dynamic range in an individual image. That's not the case with 1" and larger sensors.

otto k: The Olympus E-M1 II has the option of automatic motion compensation and applies it to the RAW as well as the JPEG. The issue is that it doesn't always work 100% (e.g. fast-moving cars), so having a separate non-hi-res raw/JPEG is great for fixing those areas manually. Also, it looks like Panasonic applies some sort of motion fix, as the usual artifacts aren't there; again, not 100% accurate, but fixable with the additional non-hi-res shots.

Same here. I am very happy with sharpening up etc and then downsizing to 60 MP or so. Also in low light the colours and noise are really several stops better than the single shot file (which as you say is in there too with the Em1.2).


@Roland - when we move the sensor but the lens is fixed, we are sampling the same image with small offsets, and we cannot get more than the lens is projecting. But if we move the whole camera by a pixel (or half a pixel), the sensor actually sees a different image, since the lens sees a different scene. Since we cannot move the camera that precisely, we could emulate it by moving the lens (using the OIS mechanism; I know it's a closed-loop system not designed for this, but maybe it is possible).

I was just wondering whether the solution to the blurred parts would be combining two images: the 80MP one and the normal 20MP one upscaled to 80MP. The blurred parts could be erased, revealing the static image underneath. I know this is not a perfect solution, but it might be better than having the blur? Has anybody tried this, and if so, how would they rate it?
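
That blend is straightforward to prototype outside the camera. A minimal sketch of the idea (hypothetical filenames and guessed thresholds; this is not a built-in camera feature):

```python
import numpy as np
from PIL import Image

# Load the high-res composite and the normal single shot (hypothetical files),
# upscaling the single shot to match the high-res dimensions.
hires = np.asarray(Image.open("hires_80mp.jpg"), dtype=float)
single = Image.open("single_20mp.jpg").resize(
    (hires.shape[1], hires.shape[0]), Image.LANCZOS)
single = np.asarray(single, dtype=float)

# Where the two frames disagree strongly, motion artifacts are likely;
# fade toward the (blur-free) upscaled single shot there.
diff = np.abs(hires - single).mean(axis=2)
mask = np.clip((diff - 12) / 24, 0, 1)[..., None]   # thresholds are guesses

patched = (1 - mask) * hires + mask * single
Image.fromarray(patched.astype(np.uint8)).save("patched.jpg")
```

As a commenter notes above, Olympus ships a normal-resolution frame alongside its high-res Raw for exactly this kind of manual patching.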

Hi-Res mode: I think that 1 in 1000 of us would find this useful. In urban areas just about everything is moving (with the exception of maybe Chernobyl and Fukushima), and that results in blurred pictures. The same goes for landscapes (at least mine): there are trees, grasses, flowers and birds everywhere I go, and mostly the wind is blowing... blurred pictures. Maybe ancient Egypt or Petra, with no trees around, would be a good subject. For me it's easy: I just grab my a7R II. True, it's just 42MP, not 80, but in ONE shot! For large prints (2m x 6m) advertising products, fashion, food, new gadgets, etc., it may be useful.

While I don't use it a lot on my Olympus E-M5ii, I have successfully used it with long exposures and moving water in cases where I could not use an ND filter (as I don't have one for my 7-14mm lens), and obtained very good results that were more pleasing than a single exposure in the same conditions. The extra detail, reduced noise, and colour accuracy were welcome in the static parts of the scene. If I'm already shooting with a tripod, it only takes a few extra seconds to use HR mode, and I always have a regular RAW file to use for blending, or if I don't like the HR result. I now have my HR settings assigned to the mode dial for quick access.

Since the new versions of DFD on both the GH5 and G9 (480fps vs the previous 240fps) have improved focusing, the question arises: will our "old" V1 lenses work fully on these new cameras, or do we have to throw them in the bin, or just use them on the old cameras? My guess is that the new V2 lenses received a faster controller to cope with the high-speed, precise positioning of the AF motors, and the optics remained the same. But since Panasonic has released this document promising a firmware update, it may be that only a new firmware update is needed, and it just hasn't arrived yet. But will it ever arrive? Or will we be forced to buy new lenses to benefit from the precise and fast AF and other improvements? Or, to ask it differently: what will we lose using the old lenses with the G9?

On page 21, there is a lens compatibility listing for 5-axis Dual IS and 5-axis Dual IS 2.

Panasonic promised a firmware update in 2017 for the "old" lenses (which hasn't arrived yet), but concurrently started releasing V2 versions of a few "old" lenses too, like the 12-35mm F2.8 II, 35-100mm F2.8 II, etc.

They hoped to upgrade to Dual IS 2 compatibility, but in the end it wasn't possible (hardware limitation). Support for Dual IS 1 is there in any case, which should be enough for most use cases; Dual IS 2 really comes into play at focal lengths greater than 280mm. But there are other benefits to the 12-35 and 35-100 version II, like weather protection and step-less aperture.

I followed most Panasonic publications on their website and couldn't find any confirmation of what you stated: "They hoped to upgrade to Dual IS 2 compatibility, but in the end it wasn't possible (hardware limitation)." I called Panasonic AU well before the G9 and couldn't get any confirmation (or information at all), so if you know of any publications regarding this, I would appreciate it if you would share them with us.

Here are the latest compatibility charts. I will also try to find the original statement about the 12-35mm and 35-100mm mark 1s. The problem with those lenses was definitely a hardware problem. I use both of them on my GH5 and now my G9 and have no issues with the IS. As stated above, Dual IS 2 really only affects performance at the long end (280mm upwards). http://av.jpn.support.panasonic.com/support/global/cs/dsc/connect/dual_is.html

sijou: no official paper, but around the time of the release (AFAIK it was together with the GH5) I watched some videos, and PhotoJoseph in particular had some first-hand information from a Panasonic official (Sean Robinson), even with live interviewing, and that's where they mentioned it.

@Roland Karlsson, Many (most?) people view photos on their phones. Many (most?) people view everything on their phones. Hi-res and 4K ... the masses don't give a ratz-*zz. That's why ILC sales are in free-fall. Ninety-five percent just don't care.

@cdembray. I know that most people do not care about pixel shift. But - if this is an article showing pixel shift pictures, then the images in the article are best judged by those that care about the extra quality pixel shift can give. Then saying that it looks good on his iPhone is either ignorant or arrogant or a joke.

How does stitching result in less noise? The pixel shift modes typically sample every pixel multiple times (e.g. twice with the green and once each with the blue and red colour filters), effectively stacking to reduce pixel-level noise.

Stitching only gives you less noise if you first take 4 shots of each piece you want to stitch, then run each set of 4 shots through a mean stack, THEN run the resulting mean-stacked pieces through stitching software. It's 10 times more laborious than just putting your camera on a tripod, hitting the button and letting the camera work its magic, like the G9 and Oly cams can do.

Stitching does also get you lower noise. Stitching gives you more pixels, so when downscaling to the original number of pixels you get less noise. One big advantage of stitching is that you also increase the effective sharpness of the lens.
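
The square-root intuition both sides are leaning on here is easy to verify numerically; a toy sketch (our own, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
signal, noise_sd = 100.0, 10.0

# Four exposures of the same (flat) subject with independent sensor noise.
frames = signal + rng.normal(0.0, noise_sd, size=(4, 1_000_000))

print(f"single frame noise: {frames[0].std():.2f}")            # ~10.0
print(f"4-frame average:    {frames.mean(axis=0).std():.2f}")  # ~5.0, sqrt(4) lower
```

Averaging N samples per output pixel cuts the noise standard deviation by sqrt(N), whether those samples come from stacked pixel-shift exposures or from downscaling a stitched image with N times the pixels.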

@karlsonn You're also 'increasing the sharpness of the lens' with this mode. If you think about each capture as a discrete image, the lens only has to resolve to the pixel count of the sensor. You could achieve the same result by carefully moving the tripod, double sampling and stacking the resultant images, but it would be a lot more work and unlikely to be anywhere near as accurate.

@Roland -- I mean proof that stitching is a practical way to produce error-free images under typical shooting conditions. I don't doubt that noise is reduced, but the shooting process is impractical for many (or maybe most) situations because it takes too long. I also mean proof in the form of actual images demonstrating that it does what is claimed.

You had a "that" in your post. And the previous posts were about stitching reducing noise. And - I assumed you answered my post, or one of the preceeding. But no. You obviously just were talking for yourself with your own (favorite?) topic.

It is not all that much more hassle. This multishot technique demands that you use a tripod. It will take 4 images (or more) and then combine them.

You could instead use a longer focal length and take 4 pictures (or more) pointing in different directions and then use automatic stitching. You do not need a tripod for most pictures. Not more difficult. And the movement problem is smaller; you can often simply fix it with some extra overlap, which the auto-stitching software can detect.
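
For what it's worth, the auto-stitching workflow Roland describes is largely push-button these days. A minimal sketch using OpenCV (hypothetical filenames; the stitcher handles feature matching, alignment and blending, though, as discussed below, not always perfectly):

```python
import cv2

# Overlapping handheld frames of the scene (hypothetical filenames).
images = [cv2.imread(f"shot_{i}.jpg") for i in range(4)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"stitching failed with status {status}")  # e.g. insufficient overlap
```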

@Roland -- I have done a lot of panorama stitching from shots taken with UWA and fisheye lenses, using a 2-axis VR rig on a tripod, such as with a 7mm rectilinear lens (12 shots around 360˚), or with a full-frame fisheye (8 shots around 360˚ plus up and down) defished and processed in PTGui. Overlap is a given. They usually require repairing of stitching errors which is time consuming, and they also suffer from distortion in conventional screen or print viewing mode. Most of my commercial architectural stills work is High Res with UWA rectilinear lenses. Are you saying it's possible to avoid stitching errors and distortion when making UWA images with stitching? If not, it's out of the question.

We are only talking about replacing pixel shift with stitching, nothing else. Not UWA images. You take maybe four or nine images with some overlap. Maybe using an 85 simulating a 50 mm. If nothing is nearby you can take the images without tripod. If things are nearby, a panorama head is nice.

@Roland -- I presume you are citing FF lens equivalents. On mFT that would be using a 42.5mm lens? Well that's far from the wide AOV that I need for most shots and I still don't see any reason to bother with it, when High Res does a great job (with some PP work). I will give it a try, although I think it will only work properly for WA architecture using a shift lens. Here's a relevant article - http://bayimages.net/blog/stitching-with-tilt-shift-lenses-to-create-high-resolution-images/

BTW today I tried HDR plus High Res: I shot bracketed High Res exposures with camera on a solid tripod and tethered to a computer running Olympus Capture to control the camera without touching it. After processing the Raw files in Olympus Viewer 3, then blending in HDR EFX Pro2, the resulting image is even sharper and more detailed than the individual HR shots! There are halo artifacts around some high contrast edges, however, which may require different exposure and/or processing settings to alleviate.

@Roland -- Please read the thread again. OP wrote: "This seems like a solution in search of a problem. Why not just stitch images, if you want higher resolution? Less hassle, no need for tripod." That is what I replied to. Subsequent posts brought up the issue of noise, but the discussion was not exclusively about noise.

@obsolescence - you are describing a very challenging, UWA kind of stitching which certainly is a hassle. But for more conventional shots, taking five or six overlapping shots without a tripod (while maintaining constant focus, exposure etc.) and then merging them in Lightroom or Photoshop is extraordinarily hassle-free. It does strike me as a very straightforward alternative to pixel shift. Of course, in those cases where perspective or parallax could be an issue, special care does have to be taken, but for general landscape or architectural shots, or anything with a standard or longer focal length, this doesn't happen too often, in my experience.

@timo -- I certainly will try it and see if it will work for me. "General architectural shots" are mostly UWA in my work. Let's do a hypothetical.

If I plan to shoot for stitching, say 6 images that will make up the equivalent of a 9mm lens on mFT (18mm equiv. FF), what would the appropriate FL be? ...I'm guessing 20mm (40mm equiv. FF).

Imagine a fairly tall and wide building shot from the street. I aim the camera upward to shoot 3 for the top row (left - center - right, overlapped); I set the camera level and shoot 3 more. The perspective of the shots aimed upward will be very different from the other ones, having quite extreme convergence in vertical lines versus none in the others. How will I reconcile that when trying to stitch?

I don't see how it's even possible to stitch the top row because the verticals will be at greatly varying angles side-to-side in each shot. How would the right side of one shot match with the left side of the next? I would expect awful stitching errors.

@Roland -- I am aware of how they "fix" the problem, but not without errors, smearing of detail, distortion, and other problems -- no comparison to a single high resolution shot taken with a highly corrected UWA lens. That is why these expensive lenses are prized by photographers.

I visited the Kolor Autopano Giga gallery, which is quite striking but doesn't offer high res samples, only downsized screen images. It features very wide panoramas with aspect ratios that are not representative of typical rectangular images that my clients want. It also doesn't indicate how much post processing work went into them. The closest one is "Bruxelles," by Thierry Guinet, but it's not very sharp. I have yet to see a stitched UWA image other than shift-stitched that can deliver clean, convergence-corrected, undistorted high res standard rectangular images of an architectural subject -- in the way I know this can be done efficiently with High Res capture. I'm certainly open to the potential (I will do some tests) and to discovering samples that anyone might want to link here.

@obsolescence. I have to give you one here. And take one step back. If you want perfect stitching for UWA pictures, then it is not easy with this program. Not at all. It is cheating and using very good blending to hide double contours. So - there might be distortion.

I have not tried to make perfect UWA stitched images. But, I have a notion why it fails. And the main reason (I think) is that it does not find the correct distortion parameters for the lens, nor the exact focal length.

You can calibrate the lens though. But I have not tried doing so.

But, there is another problem. The automatic stitching points are not in the exact positions so it has to approximate the stitching. And - I think this might be the main problem for finding distortion also.

OK. But - for the original problem. Taking a small amount of images to increase resolution. This works just fine.

Seems very niche and impractical at this time. More of a university style proof of concept.

I could imagine, with faster sensor readouts and pixel-shift speeds in the coming 5+ years, that in bright light a shutter speed of 1/1000 could render multiple shots within the span of a single exposure of, say, 1/125 (eight shots at 1/1000 sec amount to 8/1000 = 1/125 sec of total capture, ignoring readout gaps).

With superior AI processing this could be an incredible technology.

Right now... Who would actually use this in a meaningful way that actually delivers any photographic advantages of value?

It's a useful tool for macro, architecture, still life, etc. Medium format is normally the solution for shots like that when absolute detail is required. But this hi-res mode would produce something approaching MF in image quality.

For me, this is an answer to a question I was never asked. Most clients have no need.

The technology has been around since 2001: first from Jenoptik's Eyelike digital backs, then later from Hasselblad (the H5D-50c MS & 200c MS allow still-life studio photography at moiré-free 50 or 200 megapixel resolution, using four- and six-shot technology ...)

Not sure if I could tell the difference when viewing an 80MP image on a 2k or even 4k monitor. I suppose if you need to print wall size images or bill boards of stationary only pictures on a tripod, it might be a good feature to have. Besides that, it's just another gimmick (useless feature) AFAIC.

My point as well. I am tired of reading about 80MP and 60 fps and ISO 51200, etc., and all their limitations. I'm more interested in something like 20MP that can be used easily in common situations to give more quality: some extra DR, less noise, etc.

Perhaps Panasonic or whoever can give us just the same 20 MP size image but with better quality after all the pixel shifting. I don't mean a simple resizing, but something that reassembles the image from all the shifted pixel info, by taking account of how the image was formed, for better quality. Few of us do posters and those who do it more than a couple of times a year can do much better with a larger format camera than working around all the limitations. These sample photos do not show any benefit in IQ.

@Sergey: Try Pentax; their Pixel Shift gives you the same 36 MP (K-1) or 24 MP (K-3ii / K-70 / K-P) as a single shot, but with full color information at each pixel like with a Sigma Foveon sensor, so there's no need to interpolate. (And, because they don't attempt to upsize, they only need to take four exposures instead of eight.)

I tested the Hi Res mode on the Olympus em1 mk2 with the 12-40 Pro lens against a Sigma Dp3Merrill.

The Sigma DESTROYED the Olympus.

I took a photo of a bookshelf in gloomy light from about 15 feet. The Olympus was good, but the Dp3Merrill picked up individual threads and dust on the glass front of the bookshelf that were just not visible with the Olympus. The Sigma had incredible fine detail of the texture on the book spines, which were just blur with the Olympus.

If you want to take high resolution shots with a small mirrorless camera and you are going to use a tripod, save time and money and try a Sigma Foveon camera.

You have to remember that Foveon captures 100% detail (RGB in every pixel) in a single, native res image. Bayer high res is still interpolating multiple images, each containing only 50% detail (RGGB square). Foveon is known to have detail well beyond its MP rating because of this.

You don't understand how the pixel shift tech works. Oly's and Panny's pixel shift accounts for the bayer pattern by taking 8 shots and shifting the image one pixel to get all the information AND to increase resolution.

Pentax has a similar tech, but they decided (rightfully IMO) to forgo the other 4 shots that increase resolution, and just do a 4-shot pixel shift to get 100% color info ala Foveon. The result is an APS-C sensor with better resolving power, detail and noise performance than most FF sensors.

It would be great if Sony puts this in their RX10 V or another 1 inch camera. Hopefully, the smaller sensor and a more conservative approach (4 shots only and less MPs) can actually produce more usable images with higher DR, colour depth, and not just very high resolution and huge files.

@Sergey I'll never really understand why Oly and Panny are set on using pixel shift for color info AND resolution. Just doing color info would make them both smoke all APS-C sensors (save Pentax), and compete with some full frame sensors. Plus, it would be more feasible to increase the shooting envelope with just a 4-shot sensor shift instead of 8.

I have to say I'm pretty impressed. As with a lot of technology, this would have some limitations and may not be used that often, but when the situation is right and the need arises, it can be a terrific consideration.

I don't recall reading anything about noise but that was almost non-existent as well. As such, I think this has some real application for mobile devices, if only via add-on apps. (My guess is that Apple/Droid would not want them to be native to the phone because too many people wouldn't know how to use it correctly.)

I also note with interest that the firmware support for this type of imaging must still be in relative infancy. Furthermore, this type of capture must benefit relatively smaller sensors more than larger ones. If it can be mastered properly, it should give the (say) 4/3 sensor a leg up in quality imaging via a procedure that larger sensors may have trouble following, given the larger initial files they create and the huge amount of processing needed to merge them into a file of almost unimaginable size.

Of course it could be reasonably said that a larger sensor does not need this process as much if at all. But all the 4/3 sensor has to do is match quality and then it becomes a very compelling alternative.

Pixel shifting for image resolution enhancement is different from pixel shifting for noise reduction. Except for differences in noise and diffraction effects, a 20MP m43 sensor that does pixel shifting to obtain a higher-resolution file is no different from a 20MP FX sensor doing the same thing, assuming both capture the same FOV (otherwise you can't make a direct comparison): same amount of data to push around inside the camera, same relative resolution increase.

When Leica purchased Sinar they acquired Jenoptik's digital back technology. With the Sinarback eVolution 86H, 1- and 4-shot captures can be taken with its 50MP sensor. https://sinar.swiss/products/digital-backs/#!/1 With the Sinarback eXact, you can create multi-shot photography with 4 or 16 shots, allowing you to get resolution of up to 192 megapixels!

What small sensor camera can give you One Hundred Ninety Two Megapixel shots ???

Obviously this emerging technique opens new fields of artistic endeavour and in time will result in some excellent images. All is not necessarily good only because it is perfect. Interesting that early photographs often appeared devoid of life or had ghosts of presence due to long exposures - the more things change ...

Perhaps now a technique might emerge to allow images to “disappear the crowds”. Maybe one day tourist attractions can be made into just “unadorned tourist attractions”.
