Hello all:

My usual workflow is to shoot RAW and use either DxO or ACR to process the files. I then open in Photoshop (currently CS6) and do my adjustments (burn, dodge, Levels, rubber stamp, etc.). When finished, still at native resolution and with NO sharpening applied, I save it as a master.

If I get a print order, my workflow has been: rename the file to something like "24x32 sharpened file XXXX", apply PK Capture Sharpener, resize to the print size, and last apply PK Output Sharpener.

I recently had a print order for a 24"x32" image made from a Pentax 645D file. I printed it at 240 ppi. I somehow LOST the master file (DUH!!). No problem if all my prints were to be that size, but I just filled an order for a 17"x23". The image had already been sized and sharpened as a 24"x32". What I did was reduce the image size in PS to 17"x23" WITHOUT resampling, so the image was now 338.3 ppi. I printed it, and it somehow does not seem to have the same "punch" as the larger print. Should I maybe have downsized to 240 ppi in PS by downsampling, using the option which says "best for reductions"?

Any advice short of "Don't lose your master"?

Thanks,
Dave in NJ
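(For reference, the ppi figures above are just pixels divided by inches; a quick sketch of that arithmetic, with a pixel count assumed from the 240 ppi starting point rather than taken from the actual file:)

```python
# Effective PPI when changing the print size WITHOUT resampling:
# the pixel count stays fixed, so ppi = pixels / inches.

def effective_ppi(pixels_long_edge: int, print_inches_long_edge: float) -> float:
    """PPI obtained by re-targeting the same pixels to a new print length."""
    return pixels_long_edge / print_inches_long_edge

# Assumed pixel count: a 32" long edge at 240 ppi implies 32 * 240 = 7680 px.
pixels = 32 * 240

print(effective_ppi(pixels, 32))            # 240.0 ppi for the 24x32" print
print(round(effective_ppi(pixels, 23), 1))  # ~333.9 ppi for the 17x23" print
```

(The exact 338.3 figure in the post depends on the file's true pixel dimensions and on the fact that 17x23 isn't quite the same aspect ratio as 24x32; the principle is the same.)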

What I did was reduce the image size in PS to 17"x23" WITHOUT resampling, so the image was now 338.3 ppi. I printed it, and it somehow does not seem to have the same "punch" as the larger print. Should I maybe have downsized to 240 ppi in PS by downsampling, using the option which says "best for reductions"?

You should have rerun PKS after the resize without resampling, because the sharpening is based on the actual PPI resolution. So the difference between 240 and 338.3 ppi affected the sharpening, as you've seen.
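To illustrate why ppi matters: a sharpening halo of a fixed physical width on paper corresponds to a different pixel radius at each resolution. A toy sketch (the 1/100" halo width is an illustrative assumption, not PK Sharpener's actual value):

```python
# A halo of fixed physical width on paper needs a different pixel radius
# at different output resolutions, so output sharpening is PPI-dependent.

def halo_radius_px(halo_inches: float, ppi: float) -> float:
    """Pixel radius needed to produce a halo of the given physical width."""
    return halo_inches * ppi

HALO = 1 / 100  # illustrative: one hundredth of an inch on paper

print(round(halo_radius_px(HALO, 240.0), 2))  # 2.4 px at 240 ppi
print(round(halo_radius_px(HALO, 338.3), 2))  # ~3.38 px at 338.3 ppi
```

Sharpening tuned for 240 ppi is therefore too tight once the same pixels print at 338 ppi, which is consistent with the print losing "punch".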

If you are interested in sharpening, take a look at this lesson by, probably, the world's greatest authority on the matter. His conclusion, briefly, is that most software has got it totally wrong (or worse).

He's a legend in his own mind...I've seen a couple of his videos and he spends more time pontificating than actually teaching. But, by all means, make up your own mind. He's like an English Ken Rockwell.

His conclusion, briefly, is that most software has got it totally wrong (or worse).

OK...that's an hour of my life I'll never get back...but I had to see it for myself (kinda like slowing down to look at a car crash on the highway).

He doesn't really understand the genesis of unsharp masking–which was an analog method of taking an image and making a blurred mask version to be combined with the original to increase the edge contrast and the acutance. It started not with scanners but with process cameras, to make sharper film separations...
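The digital version of that analog process is easy to sketch. A minimal NumPy version (the sigma and amount values are arbitrary, and real tools clip the result back into range):

```python
import numpy as np

def gaussian_kernel(sigma: float) -> np.ndarray:
    """1-D Gaussian kernel, truncated at 3 sigma and normalized to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def unsharp_mask(img: np.ndarray, sigma: float = 2.0, amount: float = 1.0) -> np.ndarray:
    """Digital USM: original + amount * (original - blurred mask)."""
    k = gaussian_kernel(sigma)
    # Separable blur: convolve each column, then each row (the "blurred mask").
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, blurred)
    return img + amount * (img - blurred)  # real tools clip this to [0, 1]

# A hard edge picks up overshoot/undershoot halos -- the acutance boost.
edge = np.outer(np.ones(16), np.r_[np.zeros(8), np.ones(8)])
sharpened = unsharp_mask(edge)
```

The overshoot above 1.0 on the bright side and undershoot below 0.0 on the dark side are exactly the halos the analog mask produced on film.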

He likes to wave his hands and tout his "magic" but he doesn't really seem to drill down on anything unless you pay your money and join his inner circle. The problem is, his 'magic' isn't really magic, it's simply techniques...and these techniques are pretty easy to discover for oneself. So, I won't be joining Guy's inner circle any time soon :~)

Guy doesn't understand how to use the Camera Raw sharpening controls that he dismisses. He dismisses the Unsharp Mask filter in Photoshop (even though its controls actually do a digital version of the analog USM very accurately). He thinks Camera Raw and Lightroom are "useless" for sharpening and color correction...hmm, I wonder what Guy thinks of Thomas Knoll and Eric Chan...I have enormous respect for both of them, and not so much for Guy.

But hey, that's just me...I know a lot about sharpening...worked with Bruce Fraser to develop PhotoKit Sharpener...licensed our output sharpening for Lightroom and Camera Raw and consulted with Thomas (and Eric) on the ACR/LR capture sharpening.

Guy spends 80-90% of the hour-long preso denigrating the tools and the engineers who have created Photoshop, Camera Raw and Lightroom. He spent a small amount of time actually talking about sharpening...but it was funny (I don't know how many people would catch it): about 3/4 of the way through, after the discussion of "Curves" and how he targets what he calls a "proportional mask", the Layer Blending Options dialog flashed on screen briefly before he grabbed it and moved it off screen...seems he didn't really want to show that. Know why? That's how you can target the various levels where a sharpening layer can be applied, using the Blend If sliders...

So, let's see...he likes to use Unsharp Mask at 500 amount and 2 pixels...then he magically uses a proportional mask to apply the sharpening...Uh huh. Not particularly enlightening, is it? You have to pay to join the inner circle and learn the magic.

Are the fuzzy before images on the left in Guy's demo the product of Camera Raw's default sharpening? I don't know what he's comparing his results to. If the left shots are completely unsharpened, of course his work is jaw-dropping good by comparison. But isn't the unsharp mask technique becoming outdated now that we have deconvolution-based sharpening tools?

Are the fuzzy before images on the left in Guy's demo the product of Camera Raw's default sharpening?

Actually, if you were paying attention, he's showing the before/after images in Aperture, not LR, and he doesn't say whether or not any sharpening was applied by Aperture to the before images, although based on the softness I'm assuming he turned off any sharpening so his after looks better. He only opened Camera Raw once, to denigrate the controls. He seems to really, really hate Lightroom.

Just to be perfectly clear, Guy may know some stuff...I don't doubt that. The biggest problem I have with somebody like this is the constant harping on how BAD all the software is. He promotes himself by denigrating the very tools he's using. And his Focus teasers are simply self-promotional tools to get you to pay to join and learn the real stuff–which, since he's not a software developer, is simply using Photoshop's own toolset that anybody can learn how to use. It's not like he's developed any special algorithms or plug-ins himself...

And of course our very own Jeff Schewe; personally, I doubt there are many who know more about the subject of sharpening than Jeff.

Thanks for the kind words...but most of what I've learned I learned from Bruce, and from the work I've done after Bruce passed away. But yes, I will admit I do know a lot thanks to working with the likes of Thomas Knoll and Eric Chan on ACR/LR sharpening–which Guy thinks sucks. Funny, I bet I could teach Guy how to use ACR/LR's sharpening pretty well, but I seriously doubt he would have any interest–and it would kill off a lot of his "magic".

If you are interested in sharpening, take a look at this lesson by, probably, the world's greatest authority on the matter.

Hi,

Well, that depends on who is calling whom an authority (for a blind person, a Cyclops seems to be a genius).

Quote

His conclusion, briefly, is that most software has got it totally wrong (or worse).

Actually, he is right, but for the wrong reasons. He does have a point about the almost(!) brain-dead controls we have to cope with in many applications.

Unfortunately, in this day and age, not a single time did he mention deconvolution (which might be a good thing if he doesn't understand what it is about). It's a basic procedure to anybody even remotely familiar with the physical properties of digitized image data (from a CCD/CMOS sensor, or even from an analog scanning tube). I do understand where he's coming from, with a scanning operator's background, and from that limited angle of view he is correct: modern controls (wang-bars) do suck.

However, anybody who is even slightly introduced to digital signal processing (DSP), which one can learn for free, should know that digitized image data offers a different toolset to reduce Capture deficiencies (residual lens aberrations and diffraction, and DOF blur), which impact the entire(!) spatial frequency range of the Modulation Transfer Function (MTF). In addition to that (capture sharpening, in a gamma-adjusted colorspace or not), there are several methods to address output sharpening, and the better ones apply variable sharpening at different tonality or local contrast ranges.
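To make the MTF point concrete: the MTF is just the Fourier magnitude of the point-spread function, so a Gaussian-ish capture blur attenuates exactly the high spatial frequencies that deconvolution later tries to restore. A small NumPy sketch (the sigma value is arbitrary):

```python
import numpy as np

# The MTF of a blur is the Fourier magnitude of its PSF. A Gaussian PSF
# passes DC untouched but strongly attenuates high spatial frequencies.
n = 256
x = np.arange(n) - n // 2
sigma = 2.0
psf = np.exp(-0.5 * (x / sigma) ** 2)
psf /= psf.sum()  # normalize so the blur preserves overall brightness

# Center the PSF at index 0 before the FFT, then take the magnitude.
mtf = np.abs(np.fft.rfft(np.fft.ifftshift(psf)))

print(round(mtf[0], 3))  # 1.0 -> DC (overall brightness) is preserved
print(mtf[-1] < 0.01)    # True -> fine detail near Nyquist is nearly gone
```

This is why a sharpening method that only boosts edge contrast at one scale can't fully undo capture blur: the attenuation varies continuously across the whole frequency range.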

For quite some time already, I've been an advocate for luminosity blend-if layers, to control potential clipping artifacts from poorly designed sharpening tools (see the dialog box below which only suggests a starting point for adjustment):
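For the curious, the blend-if idea is simple to express in code. A sketch of restricting a sharpened layer by the underlying layer's luminosity, with a feathered ramp standing in for Photoshop's split sliders (the thresholds and feather width are arbitrary):

```python
import numpy as np

def blend_if_underlying(base: np.ndarray, sharpened: np.ndarray,
                        lo: float = 0.1, hi: float = 0.9,
                        feather: float = 0.05) -> np.ndarray:
    """Apply `sharpened` only where `base` luminosity sits inside [lo, hi],
    fading it out near the endpoints (like split Blend If sliders).
    This keeps sharpening halos out of near-black and near-white areas,
    where they would clip."""
    ramp_up = np.clip((base - lo) / feather, 0.0, 1.0)    # fade in above lo
    ramp_down = np.clip((hi - base) / feather, 0.0, 1.0)  # fade out below hi
    w = ramp_up * ramp_down
    return base * (1.0 - w) + sharpened * w

base = np.array([0.0, 0.05, 0.5, 0.95, 1.0])
sharp = base + 0.2  # stand-in for an (over)sharpened layer

out = blend_if_underlying(base, sharp)
# Midtones take the sharpened value; deep shadows and highlights stay untouched.
```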

Proper deconvolution sharpening shouldn't even produce clipping (or noise amplification, if avoidable), but the current sharpening controls usually do not offer any control (or even really useful user guidance/feedback) to that (clipping) effect.

I'm not so sure he's really helping with sharpening, because he seems stuck in old-fashioned Unsharp Masking techniques. He does manage to use those limited tools effectively. But there are fundamentally better ways of getting there.

Quote

He also explains how most Raw converters have got it seriously wrong:

Again, he does have a point, but then he also doesn't seem to use ACR/Lightroom optimally. He is correct that the lack of any indication of how much recovery has been applied in ACR/LR doesn't help one set the controls to the best values (of which there are plenty to get wrong). There is also an under-the-hood exposure shift encoded, which is different for various cameras. None of that helps to understand what's going on.

That's one of the reasons I prefer Capture One or RawTherapee for Raw processing; there are fewer hidden 'adjustments'. He is correct that Highlight recovery tends to kill highlight quality, but he seemingly advocates overexposing and then pulling exposure down to reduce clipping and improve the shadow detail quality, which is very risky without full control over how the overexposure is brought back into range (RawTherapee allows one to do that, and it's better at Deconvolution Capture sharpening). Maybe Aperture also does that better, which could explain why he likes it better for those on a Mac.

His kind of snarky and critical attitude towards Adobe (which seems to be effective in upsetting some) is to be seen as a bit of posturing, to sell his personal views as gospel. The fact that he needs to do that is also a bit telling, but he also does make a few good points.

I read somewhere that Deconvolution sharpening (E.g., Focus Magic, Topaz InFocus) is best for "resolution" sharpness when used for capture or output sharpening, and Unsharp Mask sharpening (e.g., Photokit Sharpener, Nik Sharpener Pro) is best for "acutance" sharpness when used for creative sharpening. What say you?

I read somewhere that Deconvolution sharpening (Focus Magic, Topaz InFocus) is best for "resolution" sharpness when used for capture or output sharpening, and Unsharp Mask sharpening (Photokit Sharpener, Nik Sharpener Pro) is best for "acutance" sharpness when used for creative sharpening. What say you?

Hi,

That's essentially correct.

The Capture process is inherently blurry, because of optical imperfections, diffraction, and defocus. Then there is the influence of (usually) an Optical Low-Pass Filter (OLPF), and the somewhat square area sampling aperture of the sensel. There is also an effect from not sampling all the color channels at each sensel position. Combined, these produce a Gaussian type of blur that can be effectively improved by Deconvolution. This assumes there is little camera shake or vibration, which has a different blur signature, but that can also be reduced by Deconvolution.

Output usually involves resampling the image data to match the printer driver's native resolution, or downsampling for display purposes. Resampling creates a certain level of blur, and that also can be effectively reduced by deconvolution.

In between these Capture and Output Deconvolution steps that actually increase resolution by removing the blur, one can target different things that involve the impression of sharpness, acutance, by boosting or reducing local contrast. But there are better tools available than simple USM based edge contrast boosters, even if they use blend-if layers and masks.

In a simple form one could think of Dodging and Burning, or High-Pass filtering which addresses specific spatial frequencies, but more efficient tools target multiple spatial frequency bands at the same time (e.g. Topaz Detail). There are also new developments in addressing Clarity at different ranges of contrast (e.g. Topaz Clarity), and all sorts of tonemapping tools for locally compressing or expanding tonality (e.g. ACR/LR process 2012, or even better Topaz Adjust which allows Adaptive Exposure adjustment throughout the image in an adjustable number of zones).

There is one type of (De)convolution (de)blurring that could be done in this Creative 'Sharpening' stage as well, and that involves the adjustment of DOF blur, but that is a complex procedure because the level of blur changes with distance, and the depth cues are to a large extent no longer available unless a depth map can be constructed.