On Assignment: Retouching, and the difference between amateurs and pros…

Maitres du Temps Chapter Three in white gold. (Larger version) There are panels at 6 and 12 that drop down into the dial and retract to uncover day/night and second time zone indicators; there’s a moonphase indicator at 4.30, date at 2 and small seconds at 8. Like all watches designed and made by famous independent AHCI members – this one is the offspring of Kari Voutilainen and Andreas Strehler – if you have to ask the price…

An image like this requires a surprising amount of work: I’ve already talked about the mechanics of lighting horological images in this three-part series (beginning here). To be honest, I originally intended to photograph the setup and other b-roll for another on-assignment post, but the simple reality is that I’m usually so busy on the shoot that I just don’t have the time. Instead, I’m going to talk about the amount of work that goes on behind the scenes.

There’s about half an hour of setup for the lighting gear for the first image, a few minutes of tweaking, and then you’re ready to go. But before that, there’s also about half an hour of cleaning and dusting for each watch; you want to remove as much dust, fingerprint oil etc. as possible to minimize retouching time. It’s especially important for any surfaces where this really shows, such as polished cases, antireflective coatings on crystals, and any case seams. An antistatic brush and blower complete the job. Even so, it’s nearly impossible to remove everything, so there will inevitably be cleanup required afterwards:

This is a direct screen capture of my 27″ Cinema Display (2560x1440px). A 100% version is here, so you can see the same actual-pixels view I see. The red box in the navigator pane shows you how much of the image I see at one go: not a lot! (The image was shot with a D800E, the PCE 85/2.8 and three SB900s; this should also give you a good idea of the image quality the D800E is capable of under ideal conditions.) You’ll also notice that the 100% view of that capture looks a little rough; that’s because I have to catch every single imperfection visible at 100%, even if it’s only a single pixel in size. It’s very important to do this because any breaks in texture are very obvious to the human eye; no matter how perfect and well made a watch is, there will inevitably be a dust particle or two. The reality is you have to retouch at higher magnifications for sufficiently precise control – even with the tablet. The view you’re seeing here is in fact 200% and retouching is nearly completed (there are a couple of spots left); this process takes at best 1.5-2 hours with a ‘clean’ watch, and up to a day with a dirty one, or one in which I have to repair manufacturing imperfections or traces of handling.
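None of this is the workflow described above, but the “single-pixel imperfection” problem is concrete enough to sketch: a dust speck is simply an outlier against its immediate neighbourhood, which is why it jumps out at 100%+ magnification. A minimal illustration in Python (the grid values and threshold are arbitrary assumptions):

```python
from statistics import median

def find_specks(img, threshold=40):
    """Flag pixels that differ from the median of their 3x3 neighbourhood
    by more than `threshold` grey levels (likely dust or hot pixels)."""
    h, w = len(img), len(img[0])
    specks = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbours = [img[y + dy][x + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                          if not (dy == 0 and dx == 0)]
            if abs(img[y][x] - median(neighbours)) > threshold:
                specks.append((x, y))
    return specks

# A flat grey dial with one dark dust speck at (2, 2):
dial = [[200] * 5 for _ in range(5)]
dial[2][2] = 90
print(find_specks(dial))  # -> [(2, 2)]
```

A human retoucher is doing something like this by eye, texture by texture; the point is that a one-pixel break in an otherwise smooth surface really is statistically obvious.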

Often, what appears a little rough at web-size will resolve itself into real detail in a large print – you can take this out, and I frequently do for small versions, but I leave it in in full-resolution images. A good example is the texture on the minute hand: it’s in fact a reflection of the ‘S’ of the maker’s name off the inside of the crystal and back off the hand again, and the lettering is very readable. It looks awesomely good in a large print – and these kinds of things often go to wall size or greater*.

*Side note: the reason there’s so much empty space around the watch is because these images are also frequently used for ads, or double-page spreads, and we have to leave room for text and the gutter.

Bottom line: here’s the tough part of being a pro. Not only does it have to look aesthetically perfect, but it has to be technically perfect too, and consistently so. This is only one of 130 images I produced on that shoot alone; commercial rates for this kind of photography are not just high because it’s not easy to execute (try lighting a perfectly reflective object in such a way that you have directionality for texture, but also diffusion to prevent harsh reflections) – but also because shooting time is just the tip of the iceberg; a good rule of thumb is that on a watch shoot I budget three to four days of retouching for every day of shooting. So, who still wants to be a watch photographer? MT

_____________________________

Enter the 2013 Maybank Photo Awards here – there’s US$35,000 worth of prizes up for grabs, it’s open to all ASEAN residents, and I’m the head judge! Entries close 31 October 2013.

____________

Visit the Teaching Store to up your photographic game – including workshop and Photoshop Workflow videos and the customized Email School of Photography; or go mobile with the Photography Compendium for iPad. You can also get your gear from B&H and Amazon. Prices are the same as normal, however a small portion of your purchase value is referred back to me. Thanks!

Not your bad at all, Ming. Sign of the times that people have these bots unleashed thinking it’s all fair game. I sometimes feel like I want to go back to the preinternet days of ignorant and sheltered bliss…

What I can’t figure out is the purpose of some of these spam comments – the ads and shills are all fine and well (not really) but the pseudo-fake posts boggle the mind. They don’t direct traffic anywhere. And somebody has to bother to write them: to what end?

Hello Ming, are you able to tell me what rig Bill Cunningham uses for his street shots? Thanks, James Leahy.


James> just Google “Bill Cunningham.” Search “images.” There are a ton of pictures of him camera in hand. In about 30 seconds, on my iPhone, while watching a film (FLIGHT), I’m 90% sure I can name that tune. If you searched text results, maybe even googled your question verbatim, who knows what might happen! If you really wanted to know—you could get this answer. And you don’t need to be a walking Camerapedia, either. I’m certainly not, and I got there…

Sorry if I’m in Walt Kowalski mode tonight. It’s the preemptive “Thanks” that grates on these things 😦

I dunno MT, the “curl” corn puffs t-shirt [“karl” in Japanese phonetics; spelt カール in Japanese script] and matching Borsalino, was a lovely sartorial touch [one of your self-portraits on Flickr].

Though my eyes were firmly locked on the F2T and Noct-Nikkor in that photo 😛

Best fashion photographer ever: Richard Avedon. I know it’s cliche, but there it is. Richard Avedon.
[Terry Richardson, Bruce Weber, Mario Testino, Juergen Teller are perhaps more fashionable? My references may seem about five years out of date, because, I haven’t [had to be] bothered about it for about five years! Anyone remember this one? A Flickr “great shot” for that 🙂 ]

Oh, that one and the other odd random T-shirts I have are all from Uniqlo – they’re an endless source for slightly kooky stuff that also happens to fit well and be reasonably cheap.

If I shot fashion I suppose I’d make more of an effort because I’d have a higher awareness of what works and what doesn’t (though I’m not sure about Terry Richardson, to be honest) – much in the same way that I shoot watches and what’s on my wrist is quite telling…

Ah, the UNIQLO licensing machine! Everyone is at it. Remember the days when a t-shirt just memorialized a place or a band or something… Fast Retailing Co., Ltd, Mr Yanai, are pretty good though; for all the “made in..” abuse they and others get, the patterning (the cut of the garment) is often done by the best in the business. I know for a fact that UNIQLO, when they got all super aggressive some years back, just went out and asked the best patterners in Japan to name their price, and sign deals. They in turn drag all the younger aspirational patterners with them.

Terry Richardson and Juergen Teller, etc., are completely lost on me. But I think there’s only one metric that matters: do they make money, lots of it? I’m sure they do. I think fashion photography can be interesting though: not as on the nose as straight product display, but then again, definitely all about selling you.

Oh yeah. I remember those! [and from before I was mixing up AA and Bayer filters and print sizes and screen res and WB and micro/macro and field curvature and noise and histograms and emulsion thickness and line pairs per millimeter and rangefinder base lengths and f mounts and m mounts and ED mounts and x mounts and telecentricity and double gauss and tessar and sonnetar and x-sync and pixel blur and diffraction limits and hyperfocal distances and tilts and shifts and sensor diagonals and downsizing algorithms and a/d converters and exposure bracketing and focus bracketing and flash bracketing and brackets to hold speedlights and guide numbers and umbrellas and highlight roll offs and tonal maps and color fidelity and bit depth and gamuts and vignetting and film winder actions and pc sync cords and commanders and polaroid backs and which brand of color film is the best…] 🙂

Actually I read an Internet page, probably a blog, somewhere on an English guy, forget his name, so stupid of me, but anyway he was a real pin-hole camera maestro: and all he needed was beer cans or other rubbish, with some photographic paper he’d turn it all into cameras. Just brilliant. He turned his mouth into a camera and got a mouth’s eye view of an unsuspecting dentist about to begin treatment. So so good. And I forget his name.

While I do care for the technical side of photography — and in my muddled way try and understand every last concept — I’m an aesthete at heart. I can be just as happy with that pin hole camera as with a Nikon D3: it’s just about if they give me what I want or not. Which loops it back to the tech stuff—I’m just trying to find the way to get what I want –> while not knowing what it is I want. The Sex Pistols said it better. But anyway: the pictures = everything.

Faruk hit on it so perfectly, in my view, the other day—if you don’t have a great personality/character/quality as a human (good or bad) you’ll never make a photo in excelsior. Neither gear, an aesthetic sense, or both, is enough…

Bill Cunningham uses a Nikon FE or Nikon D40 with 50mm f1.4 D series lens, sometimes with the rubber lens hood. Basically, keep it simple and concentrate on the shots. Also, most of his images are reproduced small, so he doesn’t have much need for megapixels. Another interesting aspect is that he manually focuses his shots. You don’t need much gear to create compelling images, though the trouble newer photographers have is that they are expected to have cameras that are newer and larger. 😉

Great article, Ming. I’m glad that there are true professionals like you who have the will and ability to present these amazing machines in a manner that is in keeping with their position in the watch world. I think precious few people really realize the difficulty of doing truly fine watchmaking work. Just as cleaning the watch for a shoot is tedious and time-consuming (not to mention the retouching afterwards!), so too is preparing the watch in the first place. Handling and assembling the countless finished parts, dial, hands, indicators, etc. without leaving a mark is (in my opinion) the most difficult part of the job, and it’s this that the client sees first and most often. I’m guessing that, at this point, you have a really good idea of who does it right in the industry and who just gets by. If you’re anything like me, you may have been surprised at times to see which companies fall into one or the other of those categories…

Thanks Judd. There are a few of us who can do the technical part, and fewer still who actually understand what they’re shooting – I was a watch enthusiast first before I was a photographer, so I get just as excited about a new escapement as the creator. If you don’t, how are you going to capture something that passes that on to the end customer?

Completely agreed on handling parts: there are some real surprises here…especially at the magnifications I work and retouch at.

I understand you perfectly. When I’m going to photograph my daughter I wash her face, clean her nose (a must with children) and I try to control her hair so it’s not over her face. Otherwise you lose a lot of time in postprocessing, especially when you have only Lightroom 4. Greetings.

Oh lord no. Three Hellions, eight, six and four. I’m not allowed to get too involved in any project that requires more than an hour of computer time at a stretch. Check back in 5 years when I may have time to go back and get my BFA. Even then, I’m likely to prefer shooting things that are being lit by large balls of superheated plasma 😉

In Boston, MA, where I’m from, I once charged a high-end watch store $1,000 for a shot of a Piaget Polo watch: $500 for the shoot (2 hrs.) and $500 for the retouching. They finally paid but thought my fee was outrageous. They went to someone else for the next shoot and I could tell. Good enough for them is always good enough! There are too few visually literate people out there who are willing to pay for quality!

Honestly, Ming, it’s very impressive but doesn’t sound like much fun. I always imagine the life of a pro photographer to be fun and creative. This is the other side. Keep up the amazing creative – and technical – work!

It may seem beyond mundane, but I’d really like to know a bit more about cleaning small objects for closeup photography.

You mention the use of an anti-static brush and blower, but are there other techniques you might share? The question may seem absurd, but now that I’m trying to photograph some of my wife’s jewelry (she’s hooked on weaving with beads) it turns out we live in an environment largely made up of nits, grits, smears, fingerprints and…yes…dandruff.

Washing (if you can) and drying with a lint-free cloth is also quite useful. And then there’s the cleaning putty used on computer keyboards etc – sometimes this can leave residue on porous surfaces, though – so use with caution.

I must admit, the retouching at >100% doesn’t exactly make sense to me (I mean there’s something here I’m still too thick for, not that skilled artisans who do this for a living don’t know what they are doing). So here it is: as soon as we go past 100% (actual screen pixel is actual image pixel) how are we to know which pixel was dust and which was a math artifact introduced by the scaling algorithm? Or even, how do we know the difference between a detail we want and a detail we don’t at that level?

I definitely agree with some of your previous thoughts on photography of more expensive products, Ming. If we (in advertising: planners, ADs, photogs, clients, etc) get it perfectly right, the viewer is left with the lingering notion that they’ve never seen that in real life; too much reality (dust, dirt, manufacturing imperfections), and who’s going to be lured in by that?
Neither is optimal, so we make the best choice we can, and do the latter.
[I like the word “verisimilitude” as a guide]

Looking over those great watch lighting articles again, it’s interesting to note: vintage watches don’t get, or need, the retouching treatment.
[and to note how great the GRIII is. They’re almost giving them away now the GRV is out. Someone pour a bucket of cold water over me, quick!]

The easy answer, Tom, is that we can zoom in and out to judge the element, but the edits are done at our preferred zoom level. If there is any doubt for me, or if an element seems distracting, then I will remove it. The “was it dust or a bird in the sky” question becomes more important when the output sizes are considered: if the reproduction size is small, then the little speck may be a distraction, so it gets removed, even if at larger printing sizes it is clearly a bird. 😉

Easy: pixels become visibly big blocks. It’s actually quite obvious – try it. (Admittedly, a lot is experience, too – we will toggle in and out of that zoom view to make sure what we’re doing looks right at smaller reproduction sizes.) The main reason is that at 100% you simply don’t have enough precision to shift the individual pixels around.

Vintage watches don’t get retouched because the patina is part of the appeal. That, and it would take forever and be extremely misleading. That said, if a watch looks good unretouched, then that really says something about design and choice of materials…

Just to be sure though: are we talking about zooming in past 100% on the already upscaled image [where, even at 100%, one screen pixel is definitely not one original image pixel], or zooming in past 100% on the native image [capture resolution]? At 100% the latter is 1:1, but past that it’s the same again: one screen pixel is not one image pixel, except the scaling done may be more arbitrary than if we made it a “lock in” by first resizing to the output resolution.

Tokyo trains very crowded this morning: there are at least five salarymen jammed up against me now very confused by this one too!
[This last line was probably just too meta for my audience 😮 sorry fellas 🙂 ]

Output dimensions are for printing. Whether it is 72 or 300 pixels per inch at a given output size, if you keep the total pixel count of the image constant, then you are basically viewing the original capture pixels at 100% (Command-1 on Mac OS X). Scale 300 to 72, or 72 to 300, and your file size remains the same; the printing dimensions change, but more importantly 100% viewing on the monitor remains the same. So your Nikon D3 files at 4256 x 2832 pixels can have various printed output dimensions, but 100% is 100% when viewing on the monitor.
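Gordon’s point is worth putting into numbers: the ppi tag only rescales the print, never the file. A quick sketch using the D3 figures from his comment:

```python
# Changing ppi rescales the *print*, not the file.
# A Nikon D3 frame is 4256 x 2832 pixels either way.
px_w, px_h = 4256, 2832

for ppi in (72, 300):
    print(f"{ppi:3d} ppi -> {px_w / ppi:5.1f} x {px_h / ppi:4.1f} inches "
          f"(still {px_w * px_h:,} pixels)")
```

At 72 ppi that’s roughly a 59 x 39 inch print, at 300 ppi roughly 14 x 9 inches, and the 100% view on the monitor is identical in both cases because the pixel count never changed.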

Mmm. The last line clears up my confusion, I think. But I must admit, on the other hand, I feel like I’m further down the rabbit hole now [it’s good!].

I’d got it into my head that the camera pixels [not the physical sensels, so I’m really just talking about data place holders in the file the camera hands to the computer; and I’ll forget about Bayer demosaic for here] were remaining the same size because though they are just data and have no extent themselves, the size of a screen pixel on which they are reproduced and you retouch is physically fixed and cannot change in any of this. Whenever we deviate from 100%, 1 screen pixel = 1 actually captured data point, then the computer screen is scaling the data –> introducing artefacts. I think I’m OK so far…
For views less than 100%, the computer is selectively removing data to fit to screen/window [and when I first tried PS after all I’d ever known was iPhoto, the “jaggies” — aliasing? straight diagonals getting the saw tooth treatment — at odd numbered mags: 33.3%, etc., REALLY confused and alarmed me => so thank you Gordon, on behalf of all beginners, for mentioning that previously] anyway, a zoom out introduces mathematical artefacts: some bad, some good. Edges may get blurred — also confused me at the start: as you step back from an image, it should look sharper to you; but that’s stepping back from an image as it is; this is staying put and making a smaller image –> definitely not the same difference is it! — and the crazy thing about that is if you start with a blurry full size, you may well get a sharper downsized image, and vice-versa. Depends on the algo used, but the analogy I keep in my head [I can, in theory, do and understand the algo’s maths bit, but my academic skills have rusted greatly and my brain now mostly works like a cross between a pub trivia quiz and that sprawling thing Robert Vaughn made in the end of SUPERMAN III] the analogy I keep in my head is that of the “unsharp mask.” Blurring first to get better sharpness afterward. The good effects of downsizing relate to noise –> not a hard and fast rule [because it depends on what’s in front of you], but for other beginners: take a noisy picture, try halving each linear x,y dimension — to get a quarter sized file — you should see that your noise has become not so visible [so I never touch sharpening and noise reduction tools until I’ve downsized a file; don’t let LR etc do the “25” sharpening thing to your RAWs!]. All this is where MT is coming from on the pixel binning mentioned in the perfect camera articles, etc. Anyway…
For views more than 100%, now, we really do have a different, but still related, problem—the software must invent data that was never actually there because the screen pixels can never change their physical extent. I’m supremely confident in our 21st century machine language and information technology; but I’m also savvy that it isn’t perfect either. [It’s the same thing with switching out colorspaces.] At more than 100% views, it’s the edge and tonal gradient details I was wondering about –> I guess this is ultimately guesswork [but very well guessed at]. Because, by definition, we’re looking at and often working on data that was never there in the first instance! So all the way down the line, from the coder who wrote the algo to deal with it, to the image on your screen: guesswork. Highly informed, 99% of the time, I’m quite sure.
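The halve-each-dimension trick mentioned above is easy to demonstrate: averaging each 2x2 block means every output pixel is the mean of four noisy samples, which cuts the noise standard deviation by roughly √4 = 2. A small self-contained sketch (synthetic noise, not real sensor data):

```python
import random
from statistics import pstdev

random.seed(0)

# A flat mid-grey patch with Gaussian "sensor noise" (sigma = 10):
size = 64
noisy = [[128 + random.gauss(0, 10) for _ in range(size)] for _ in range(size)]

# Downsize by half: each output pixel is the mean of a 2x2 input block.
half = [[(noisy[2*y][2*x] + noisy[2*y][2*x+1] +
          noisy[2*y+1][2*x] + noisy[2*y+1][2*x+1]) / 4
         for x in range(size // 2)] for y in range(size // 2)]

def flatten(img):
    return [v for row in img for v in row]

print(f"noise before: {pstdev(flatten(noisy)):.1f}")  # ~10
print(f"noise after:  {pstdev(flatten(half)):.1f}")   # ~5
```

This is also the statistics behind the pixel binning mentioned above: average N independent noisy samples and the noise drops by √N while the real (correlated) detail survives.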

To do a good job, you need an idea of what the thing looks like at the print and viewing size: this is what you want to correct for? The final output data you make is a kind of new 100%, intended to be printed at that certain size and resolution, and viewed from so many meters. I’m assuming, when it all goes to plan, clients don’t print larger or smaller than they said they would, since, as we’ve just seen, either process introduces effects we don’t want and have just spent hours correcting for.

So Gordon, you mention 72ppi and 300ppi. But there were only so many capture “pixels” [real data points] to begin with. 72ppi is going to spread them over a wider area, but we might be careful about viewing distance; 300ppi less so, and we’d encourage leaning in to appreciate the fine detail. But I guess my confusion was about the mushy middle –> images that are blown up very large but still viewed at close distances: the size of the 72ppi with the “leaning in” of the 300ppi. This is what the retouching addresses? But it seems as though if you are correcting for perspectives that go greater than 100% on your screen [the size of your screen pixels are smaller than the size of the dots the viewer will see in the finished work, so you need to zoom in to check the pixels], then you are correcting some data points that are real, some that aren’t—and it must be difficult to tell between the two?

I think I need a sit down.

But don’t worry fellas, I have the basic premise in my head now—thanks a lot for telling me m(. .)m

I do not retouch for my profession, but when I do … I find that the greater-than-100% view affords me more precision with the placement of my cursor and I can use a smaller brush size. That’s because my fingers and now aging eyes do not have 72 DPI, much less 300, positioning precision! In a row of 2560 pixels on a screen that’s 23.5 inches across (for a 27-inch diagonal), each pixel is less than 1/4 of a millimeter wide. The 100% view is still pretty zoomed out for the high-res displays we have today. For a more extreme case, imagine trying to do pixel-level editing on a Retina-display iDevice with just your fingers.
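Andre’s sub-quarter-millimetre figure checks out with a two-line calculation (assuming the usual ~23.5-inch horizontal width of a 16:9 27-inch panel):

```python
# Pixel pitch of a 27" 2560x1440 display.
width_in = 23.5                       # horizontal width of a 27" 16:9 panel
pitch_mm = width_in * 25.4 / 2560     # mm per pixel
print(f"{pitch_mm:.3f} mm per pixel")  # -> 0.233 mm per pixel
```

So a one-pixel dust speck at 100% is under a quarter of a millimetre on screen, which is exactly why zooming in further makes cursor placement so much easier.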

In LR, and I would imagine most editing programs, the pixels seem to be just copied for >100% views, so it’s easy to figure out the spot you’re trying to fix, and there doesn’t seem to be any false data creation that may deceive the user. Ah, and I see Ming has already said the same thing below.
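What’s described here (the editor just copying pixel values for >100% views) is plain nearest-neighbour magnification. A minimal sketch, assuming nearest-neighbour scaling, which matches the “blocky” look being discussed:

```python
def zoom_nn(img, factor):
    """Nearest-neighbour zoom: each source pixel becomes a
    factor x factor block of identical screen pixels."""
    return [[v for v in row for _ in range(factor)]
            for row in img for _ in range(factor)]

img = [[10, 20],
       [30, 40]]
print(zoom_nn(img, 2))
# -> [[10, 10, 20, 20], [10, 10, 20, 20], [30, 30, 40, 40], [30, 30, 40, 40]]

# No new values are invented, only duplicated:
assert set(v for row in zoom_nn(img, 3) for v in row) == {10, 20, 30, 40}
```

Hence the big blocks, and hence no “false data” to mislead you: every block on screen maps back to exactly one captured pixel.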

Also, the LR processing pipeline is structured so that the various operations are all done in the right order, no matter the order that you applied them before output. The 25% default sharpening thing is capture sharpening to compensate for the AA filters in front of most camera sensors. There is a separate output sharpening when you output to a file or to print. That should be (and is) done as the final calculation before output in the LR pipeline.

Thanks Andre, I think the penny has finally dropped. I’ve understood the thing about zooming in in LR now: it’s always actual pixels –> LR is painting the same value into neighboring screen pixels when you zoom in. So no invention there. But plenty of blockiness! Got it. I’m sure where I was mixing myself up was with regard to the final print and its pixels/dots vis-a-vis what we’re doing in LR/PS, etc. And whether we needed to invent information to translate between them: I’m working from the assumption that at typical viewing distances you wouldn’t want a huge print looking like your LR screen does at 8:1, etc! I guess this becomes a possibility when there’s a disconnect between native pixels, human vision and output dpi –> the viewing distance could effectively do just that: look like LR at 8:1. Like an iPhone snap displayed on a football stadium screen: in order to fill the frame, it’s either add the information between the gaps, or have blank dots/pixels there [white or black depending on the medium: black in the stadium case I guess]. I suppose with the wealth of modern resolution at their disposal and things like stitching, what I was thinking of isn’t so much of an issue for the pros, but still…

Let’s say I take a picture for Client A with my D3; an uncropped image gives me 4256 x 2832 pixels to start with, as Gordon says. My client wants to put the finished picture in a store window. It must be cropped square for the window, and fit 3m^2 (three meters square). Passers by on the pavement outside Client A’s store will see the print from about 1 to 2 meters away; let’s be linear and call it 1.5m. I’ve taken the picture very well and almost filled one of the sensor dimensions with the necessary image, with the right amount of blank space left for cropping in the other dimension. Let’s say I can output a square of 2700 x 2700 native pixels, after I drop my raw image into PS/LR and crop it.

So 2700 x 2700 pixels are going to be put into a 3m x 3m area, and the image is going to be viewed from about 1.5m away.

Next I suppose we could use the circle of confusion: I always remember it as 1 minute of arc (one 60th of a degree). A very rough and dirty back-of-the-envelope calc now, and I make that something like 0.4mm as the threshold under which smaller dots are imperceptible to a pedestrian at 1.5m. She’d be a very sharp-eyed pedestrian too, most wouldn’t be that eagle eyed; not to mention the fact that no-one is really looking intently at the shop window image and they’re moving along as they do it anyway! 0.4mm feels like total overkill to me; so I’ll double it. What do I know. I’m sure the pros will be in here in a minute to pour water over the whole thing, but for now I’ll say 0.8mm is going to be the “grain size” for my print.

The smallest detail I have in my file is one pixel –> I’ll translate this to one 0.8mm dot for the print –> (2700 x 0.8)/1000 => I get a 2.16m square print…
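The back-of-the-envelope numbers above can be checked in a few lines, with the same assumptions (1 arcmin acuity, 1.5 m viewing distance, a 2700 px square at 0.8 mm per dot):

```python
import math

# One arcminute of visual acuity at 1.5 m viewing distance:
theta = math.radians(1 / 60)        # 1 arcmin in radians
grain_mm = 1500 * theta             # smallest perceptible dot, in mm
print(f"acuity limit: {grain_mm:.2f} mm")  # -> ~0.44 mm

# Doubling that to 0.8 mm per dot, a 2700 px square maps to:
side_m = 2700 * 0.8 / 1000
print(f"print side: {side_m:.2f} m")       # -> 2.16 m
```

So the 0.4 mm threshold and the 2.16 m print side both hold up, given those assumptions.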

I think where this calculation goes wrong is that I haven’t considered how the print should fill the pedestrian’s field of vision. A 3m square viewed from 1.5m away –> we could get into solid angle of human vision and etc., but my coffee break is over and unfortunately I can’t get into it. Just on gut feeling: a 3m square from 1.5m away would feel prettay prettay big and I doubt I could take it in in one eyeful?…

It’s been interesting. I tried a few edits at 100%+ over my lunch: would take some getting used to for me, I don’t need to be that bothered—and I don’t need to be more than 100% zoomed in to get anything I need to get. I’m still in my prime Andre you oldster! 🙂

P/S on the LR sharpening. Yes, you’re right. Sorry, I was mixing up LR and PS there and being incoherent. But the problem with LR’s export sharpening [the sharpening accessed via the Export dialog, or what you’ve saved to a quick preset for “Export with preset”] is that you can’t see what LR has done until the file’s been written and saved to wherever it’s going. And you have only three layers of imprecise control: “light, medium, heavy”… This is one of the major strikes against LR in my book [for what it costs, though, I certainly give it its due].
I like PS. Each step is under my control completely –> and I can see and interrogate the results before I save the file. In PS I do about 80% of my adjustments in ACR — use it just like LR — then open up in PS and do more intricate stuff if it’s necessary [like multiple-pass curves or brushes], though I only do that on rare occasions… super local and intensive PP that takes longer than 5 minutes is a chore to me and I quickly start to resent and fall out with an image that demands it… so ACR [No Sharpening / No NR!] –> open in PS –> do any fine adjustments –> image bit depth to 8 –> resize image –> convert colorspace to sRGB [all my images are for the screen] –> zoom in to 100%; inspect –> NR if necessary –> sharpen –> save as 10-12 jpeg depending on my mood and the file sizes.
I do wonder, though, about that LR pipeline… The default is 25 sharpening on RAWs; yes, you can see the effect of that on your screen at your image’s native size. But no, you can’t see what that looks like, in software, when you’ve downsized [since downsizing is part of the export command, right? That’s final output workflow, it’s a wrap at this point!]. And it follows, no, you can’t see what it looks like after downsizing and then doing another round of output sharpening to some uncounted degree [we only know “low/med/hi”]. I like PS because I can sharpen, or not, the native size image, downsize it and see what I get. Then sharpen again, if it needs it, or not. I’ve found the best result is no sharpening at native size, downsize file, do an “analog clarity” pass with the regular unsharp mask, then a smart sharpen with a fine radius and frugal % [never more than 130%, never bigger than 0.8px]. And this is OK for me.
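For anyone following the thread, the unsharp mask itself is a simple operation: blur a copy, take the difference from the original, and add a fraction of that difference back. A toy 1-D sketch, illustrative only and not anyone’s production settings:

```python
def box_blur(signal, radius=1):
    """Simple box blur; the ends are clamped."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp(signal, amount=1.0, radius=1):
    """Unsharp mask: original + amount * (original - blurred)."""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 100, 100, 100]   # a soft edge in grey levels
print(unsharp(edge, amount=1.3))
```

Note the undershoot just before the edge and the overshoot just after it: that halo, kept subtle, is what reads as extra “bite”, and it’s also why sharpening before downsizing can backfire — the resize smears the halo into something else entirely.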

Most of my shots are out of focus or shaky though—so it’s all for nought. Plus the big one: who cares!
[I do!]

I do a much simpler quick and dirty: if it looks good on a reasonably high-density LCD at the intended repro size and the intended viewing distance (either the whole image, or a portion thereof) then it’ll almost certainly be fine in a print.

Tom, it seems you are getting too technical with this. Some printing experience would help visualize this, but a couple of examples may help. Many major cities still have billboards, or they have buses and trolleys with advertising on them. Step up close and those large examples do look “blocky”. The constraint is the printing machines that output these very large graphics and images. In order to output enough prints for enough clients each day, the file sizes and output resolution are constrained. There are two reasons this works. One is that people do not get that close to most advertising prints, especially billboards. The second goes back to painting, which when you think about it is very low “resolution”. The mind’s eye of the viewer will fill in missing information, especially if it is an image with a person in it. So what may seem “blocky” up close can still be very effective at a proper viewing distance.

About the only exception I can think of offhand is interior store graphics and trade show graphics. When I have output these types of files, the demands are usually higher. So there is more pressure to have more pixels to make the images smoother appearing. (Fine art would be another exception, though that is more often a cost-no-object endeavor). Clean edges in images are a very important part of that. Just to jump to a possible question: yes, a D3 file could be upscaled and edited to produce very impressive large images.

Gordon, thanks for your patience with me. In a sleep-deprived huff last night, I was quite cheeky with James [Leahy] down below for asking questions; on waking this morning, I see that, really, I’m not much better myself. In fact, just the same. Though I’d hope I at least do it with a bit more panache. And don’t say thank you before the goods have even been given up. [meow!]

Yes, I agree, I’m probably being too technical. And not even technically correct, at that! As I say, if you ever saw SUPERMAN III, that sprawling tin foil computer that baddy Robert Vaughn made: that’s like the inside of my head. I’ll just add one more thought to why I got into a spin like that in a minute. But I’m sure I’ve got a better picture now [ 🙂 ] and understand a little more the need for zooming in so deep to an image. I also understand a bit more about the way the software actually represents these views too, thanks again Gordon, Ming and Andre:

1) Multiples of 100%, on screen, is all capture pixels, no interpolating.
2) When you’re >100%, but still at a whole multiple of 100% [well, technically any % can be a multiple of 100% if we let fractions in, but I’m trying to stay away from the technical stuff; I’m confident I got what you meant], the software is just painting the value it had for one screen pixel when drawn at 1:1 into a block of surrounding screen pixels when the picture is drawn at x:1, where x is a whole integer >1
3) So we get the “block” sized pixels
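To make that concrete, here is a minimal sketch of the nearest-neighbour replication in (2) — a toy in Python with names of my own invention, not what any real editor actually ships:

```python
def zoom_nearest(pixels, factor):
    """Nearest-neighbour zoom: replicate each capture pixel into a
    factor x factor block of screen pixels, as at 200%, 300%, etc.
    No new values are invented -- hence the "block" sized pixels."""
    out = []
    for row in pixels:
        # stretch the row horizontally...
        expanded = [value for value in row for _ in range(factor)]
        # ...then repeat it vertically
        out.extend(list(expanded) for _ in range(factor))
    return out

image = [[10, 20],
         [30, 40]]
for row in zoom_nearest(image, 2):
    print(row)  # each source pixel becomes a 2x2 block of the same value
```

Every screen pixel in the output is a straight copy of some capture pixel, which is exactly why integer zooms are honest and fractional ones need interpolating.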

Now, why I get in a tailspin. I hear you guys — who do this for a living, there’s no higher authority to me — talking about dust not looking so good when the photo gets printed big for end use => so you retouch. I hear this and compare it with my own experience [to try and relate –> to try and learn something]. When I search my own experience, I know that I have pictures where there was some dust or something on a shiny surface: but I never knew it was there until I zoomed into 100%, and even then, you had to squint a bit.

So the next thought is naturally –> well, spots of dust on the object I photographed are not a bother to me at 100%, wouldn’t the end print be scaled in the same way — it might be physically bigger than the 100% view I’m looking at now, since the print dots might be bigger than the screen dots, but still, in the translation, one screen dot will be mapped to one print dot — so scaled to be bigger in terms of physical dimensions, but it all comes out in the mix when the viewing distance for the end print is taken into consideration. That’s to say: me looking at the image at 100% on the screen; a punter looking at the finished print from the typical viewing distance on the street, in the magazine, etc. etc., gives more or less the same view => if it doesn’t bother me on screen, it shouldn’t bother the end viewer in real life.

I think this is related to the point MT makes with his on screen ruler check.

But anyway, that’s how I thought it’d be. Predicated on not finding much objectionable at 100%. Which I rarely do. And this is where I’ve done you all a disservice and wasted your time perhaps: I don’t photograph watches or products for a living!
I suppose I can relate to what Michael [Matthews] said. Being a complete nut about my cameras [which it must sound like I accumulate at a rate of knots; and I do, but it ain’t easy, and I want the gallery to know that I work for them, really hard—that’s why I’m so besotted when I get them, and in turn want more] but I love the little things, and owning a speedlight and having access to another at work, I started to feel the need to take pornographic pictures of them /*creepy face. Still working on the glamor shots; but now I get it. Yes, wow, there’s a lot of dust, etc., on these objects when you get close and photograph them. Honestly, for amateur camera porn I just soften up the lighting and avoid the raking beams; that solves a fair bit of the dust problem [yes, I know this is a non-starter for professional work where the brief, the client and the product itself dictate lighting, not convenience].
But the main point, the bad logic, and I do apologize: zooming in past 100% hasn’t been something I’ve ever felt or seen the need for in my own shoddy pictures. I am the center of the universe so my experience counts for everything. Why would you go past 100% if that’s, more or less, the closest an end viewer is ever going to experience the print from? And it looks OK on screen?

It doesn’t look OK at 100% on screen — and sometimes even when it does, people may view it a tad closer, so getting rid of the grime, even if it’s tiny, is better professional practice. Screen pixels are so small you’d have trouble doing that at just 1:1 view, so you zoom way in to fix the individual pixels at hand — this is the simple message it’s taken me this long to get.

I know, I know
m(. .)m

Unless, and this is another thing that got mixed up with the above, unless the end viewer is going to be seeing it at something more like 200% –> you wouldn’t want to show them an actual-pixels scale-up, like we just talked about our screens doing in LR when we zoom to 2:1, etc… So upscaling, inventing pixels, and normalizing — “renormalizing,” i.e., the upscaled image becomes the new 100% — our software has filled in the gaps now. This is where all my “invented pixels” stuff came from.
Now that we mention it, this is a version of what the Bayer demosaic is doing, isn’t it, Gordon? As an ex-engineer and guy in charge of some complex radiation instruments at one time, I was actually very impressed with Bryce Bayer’s patent — his solution for his problem — when I picked cameras up and began to learn about their innards. Technology and higher-end consumer taste are moving on now, I think, but the old Bayer matrix, as an intellectual product, is still a gem. Same with anti-alias filters.
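Since the Bayer demosaic came up: here is a toy version of the idea, filling in just the green channel of an RGGB mosaic by averaging neighbours. Real raw converters are far cleverer than this; it is only the skeleton of the insight, with all names my own:

```python
def interpolate_green(raw, height, width):
    """Toy bilinear demosaic of the green channel for an RGGB Bayer
    mosaic: green photosites pass through unchanged; at red/blue
    photosites the missing green value is invented by averaging the
    adjacent green neighbours."""
    green = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if (x + y) % 2 == 1:          # a green photosite in RGGB
                green[y][x] = raw[y][x]
            else:                          # red or blue: interpolate
                neighbours = [raw[ny][nx]
                              for ny, nx in ((y - 1, x), (y + 1, x),
                                             (y, x - 1), (y, x + 1))
                              if 0 <= ny < height and 0 <= nx < width]
                green[y][x] = sum(neighbours) / len(neighbours)
    return green
```

The red and blue channels get the same treatment from even sparser samples — and that interpolation is precisely what fabric moire exploits.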
The first camera I bought [in spirit my first] was a Sigma DP1M. I researched and researched and researched; but the decision was obviously complete folly based on nothing more than wanting to be cool and different. What had drawn me toward the Merrill was all the kerfuffle about “no AA filter” and “razor sharp!” etc. It sounded good at the time [to a layman] — I was a rank beginner who wasn’t even really sure what an AA filter was, and I bought it on the back of the hype. It was a happy accident, because it turns out that I happen to enjoy the way Foveons render and the way the camera looks and feels. I liked it so much I got its sibling, too.
But AA filters are great. They’re useful. And they can actually add something to photographs, not take away. Softness is sometimes very desirable. Often helpful. Even indispensable. I’m sure you know much better than me, but there are cases where moire is a killer, aren’t there. I took some product photos of clothing a month or two back, as a favor for a department at work. Did that and had the classic clash-with-reality moment when fabrics I was shooting with a D7000 were all moire’d up; I was in a tiny room with no space to move back and change the frequency of details hitting the sensor; the subject was pretty much filling my frame [on a 50mm lens!] as it was, so moving further in was not an option, either. I tried to jiggle back the tiniest bits I could. Still moire. In the end I had to take the nuclear option and defocus the lens ever so slightly to lose it. The end images were shrunk down to 680px squares for a web shop, so this defocus was completely imperceptible in the final image. But it goes to show, a stronger AA filter might have been better; the AA filter in the D7000 was about as close to the Nyquist limit as you’d dare. Great for 99% of stuff, not fun for the other 1%; I’d hate to think what happens when that 1% is something that matters.
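The moire mechanism here is just undersampling: detail past the Nyquist limit doesn’t disappear, it folds back as a false coarse pattern. A tiny numerical demonstration — pure Python, no claim about any particular camera:

```python
import math

def sampled(cycles, rate, count):
    """Sample a sinusoidal 'fabric weave' of `cycles` per unit length
    at `rate` samples per unit length."""
    return [math.sin(2 * math.pi * cycles * k / rate) for k in range(count)]

# At 10 samples/unit the Nyquist limit is 5 cycles/unit.  A 9-cycle
# weave folds back to 9 - 10 = -1 cycle: sample for sample, it is the
# mirror image of a genuine 1-cycle pattern -- the moire alias.
fine_weave = sampled(9, 10, 20)
alias = sampled(1, 10, 20)
print(all(abs(a + b) < 1e-9 for a, b in zip(fine_weave, alias)))  # True
```

Defocusing the lens, like an AA filter, simply removes the 9-cycle detail before sampling — so there is nothing left to fold back.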
I am confused — is this my motto or what! — though, because, again, half the reason I went for a DP1M in the first place was internet forum people saying no AA filter was just what you want for fabrics. As a matter of fact, I have done this test and have never gotten moire with any and every fabric I’ve tried with the DP1M and DP2M—mostly the DP2M, including with the close up lens, the AML-2, on it. Plenty of “screen moire” though [the stuff that disappears when you zoom in]. So hmm, not sure about that one really. It’s not really relevant, in the end, because the color reproduction isn’t trustworthy enough [consistent, noise free] or the post workflow realistic enough for something like product photography. Though now I’ve said that, there are probably legions of product photographers using Merrill cameras out there. It’d be just my luck!

OK, well, I think I’ve put my back into it enough. I have a Polaroid back for my SQ with me today –> I need to hightail it to the camera shops and get some film in: and have a play!

No AA filter and no Bayer filter are two completely different things. Not having a Bayer filter (and thus not requiring the interpolation that causes fabric moire) is the bit you want, not necessarily no AA filter. No Bayer filter also basically implies no AA filter too…the AA filter is required to combat moire precisely because of the interpolation.

Hi Tom. Hopefully you don’t take this the wrong way, but it appears that you are highlighting the “difference between amateurs and pros”. 😉

Of course “good enough” is often exactly that, but then again I am a professional, and I am a bit of a perfectionist. Clients like my attention to detail. Imagine photographing a woman without make-up, or with shoddy lighting … it simply would not work. It’s a bit like audio mastering technicians molding sound files to the best possible output, and then the average individual listening to an MP3 version through cheap ear-buds. 😀

There are some interesting aspects of sensors, and in the past I read a few white papers to try to understand them more. Don’t forget the dead area between each pixel. The other factor in this is the depth of each well (pixel), which gets interesting in some recent changes to chip design that make that more shallow. You can think of white as full charge, and black as no charge; that brings up some other issues with details in shadows. Ming and I mention diffraction often, and the physical size of pixels on a chip affects that. Don’t forget that Nyquist is a theory that makes assumptions in order to predict results; I’ve seen some examples in drum scanning, concerning aperture size variations and oversampling, that went beyond what Nyquist predicts … but anyway, it’s not really something most people could pick out in the end result prints. Yes, the technology is interesting, but it is a means to an end.

I’ll add in a bit here. So what is so good about the D3? You can stop down the lenses quite a bit before diffraction appears. It’s an issue that comes up when using the Nikon V1, since that has very small pixels, so I use that camera much more wide open. I read an article recently about hand-held camera movement and its effects on capture, though in reality this was an issue with hand-held film cameras too. A tripod and strobe set-up goes some way towards getting the best out of gear, but we don’t always want to capture images that way. I suppose if there is a point in that, it’s to enjoy the gear within its abilities, but realize that when you are shooting hand-held you are not getting the maximum out of any camera — and that’s quite okay most of the time.

Not at all Gordon. It’s quite reassuring in a way to hear your customers appreciate your attention to detail and would be turned off by mine. Especially after hearing some of the horror stories Ming has.
I’m a hypocrite really, because I can relate: believe it or not, I do write for a living. Yes, really! It’s short, one word stuff; I get a line if I’m lucky, and otherwise edits and rewrites of pages and pages long instruction manuals, insurance T&Cs, etc., things as stimulating as North Korean interior decoration. But I am paid to do it and so, probably hard to envisage, am not as cavalier as I am here [here –> no checks, no edits: no time]. In fact I’m scrupulous. This is one of two things my Boss and his clients care about:

1) Deadlines always met, always
2) Error free

That’s all. I make sure no matter what no excuses EVER, that these two rules are never broken on a job I’ve said “yes” to. That’s how I’ve managed to stay in a line of work I’m not trained for or proficient at, and draw a decent wage to boot!
[It used to trouble me that the artistic merit of the copy — if it was a line for an ad, say — wasn’t a feature, never mind #1, there. It still does, but I’ve learnt to get over myself.]

If someone hands me a document and asks me to proofread the body only, for example, I still check every single English word and numeral on the pages I’ve been asked to check: phone numbers, addresses, everything. If it’s in English, I’m checking it. That’s not just attention to detail, it’s also about reputation. If the document were to leave my hands, and the English copy on my pages that I *wasn’t* requested to check was found incorrect after being printed, people’s eyes would invariably flash in my direction. It’s unfair, but that’s life—and I lived this lesson a few times right at the very beginning of my career. There is also the positive side of it: the reactions when you say “you didn’t ask me to check this, but that, that and that are wrong. Here are the corrections, if you’d like them.” Anything passing through my desk, in English, I consider my responsibility; there’s not often time to take responsibility for everything, and in the more severe cases I just refuse the job—which frustrates some customers because I’m refusing on grounds of work that wasn’t requested in the first place. But I’ve managed to garner enough of a rep now that I can be this daring.

I imagine it’s the same with your pixels. Every single one of them is your responsibility. You get them as close to perfect as is within your gift, rather than sail your reputation close to the wind and put out mezzo-mezzo stuff.

In my case, I really could put out middling copy and only about 20% of my customers would care or notice; but they also happen to be the 20% that pay 80% of my take: the Pareto principle in full effect. But that it should be like this is mostly luck. A lot get by fine with rickety-rackety stuff. Some get found out; some don’t. I don’t wait for the judgement of other people; I judge myself. And I don’t get professional satisfaction from knowingly half-hearted work [not the same as “I do my very best for absolutely everything”]. I think it’s three-way really: smart clients and smart end users dictate the threshold. But, yeah, as a mafia don once said, putting the hit on a respected lieutenant: why take the chance?

Yeah, diffraction limits on the D3 are pretty good; I go to f/16 and have no problems [but I’m not exactly the gold standard 🙂 ]; and unless I’m mistaken, you can stop a lens down more on a D3 than on a D800E, as the deciding factor is the physical size of the sensels. It was interesting to read what happens when you do that test with a D800 and D800E, too. In comparison to smaller sensors, sure, I’ve never really used my 4/3 camera stopped down any more than f/11, with f/8 the tighter limit; but as we discussed on the other thread, if the reason for stopping down was to introduce more depth of field, the properties of the smaller sensor give us more DOF for the same effective field of view. I find it’s about one and a third to two stops more, e.g., what I’d shoot at f/5.6 with a 50mm on an FX camera, I’d do at f/2.8 or f/3.5 or something on my 4/3 camera if I wanted a similar render. So in practice I’ve found it all comes out in the wash, more or less, though it’s psychologically hard to accept that half of the clicks on your aperture ring are pointless 🙂 Well, not pointless, but you know what I mean: not going to see heavy use… I haven’t tried f/22 on my Bronica yet, but I plan to in the coming days—it’ll be interesting to see what happens when the negs come back, oh no! I have a rockin’ new Polaroid back for it, so I can just go ahead and test today 😛
[except the Fuji NP stuff costs 2,080 for ten!! Ouch 😮 ]
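That “physical size of the sensels” point can be put in back-of-envelope numbers. Assuming the common rule of thumb that diffraction starts to show once the Airy disk (diameter ≈ 2.44·λ·N) spans about two photosites, and approximate pixel pitches for each body:

```python
def diffraction_limit_fstop(pixel_pitch_um, wavelength_nm=550):
    """Rough f-number past which the Airy disk (2.44 * lambda * N)
    spans ~2 photosites -- one common rule of thumb, not gospel."""
    wavelength_um = wavelength_nm / 1000.0
    return 2 * pixel_pitch_um / (2.44 * wavelength_um)

# Approximate pitches: D3 ~8.4 um, D800E ~4.9 um, Nikon V1 ~3.4 um
for body, pitch_um in (("D3", 8.45), ("D800E", 4.88), ("Nikon V1", 3.4)):
    limit = diffraction_limit_fstop(pitch_um)
    print(f"{body}: diffraction visible past roughly f/{limit:.1f}")
```

The numbers roughly match the experience above: the D3 is comfortable into the f/11–f/16 region, the D800E gets fussy a couple of stops sooner, and the V1 wants to stay near wide open.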

Your comment on the LR processing pipeline basically makes things both idiot-proof and limited: certain tonal effects that can only be obtained by sequential manipulation of the data (dodge and burn first then curves as opposed to the reverse etc) become impossible with LR. It’s not an enormous difference, but nevertheless still noticeable. This has been my main frustration in trying to develop a workflow for LR…

True. LR just lets you do 1 layer of curves, and 1 sorta layer of dodging and burning, too, so you don’t get to stack effects without a lot of extra work. I get the impression that it’s kind of a reaction to the complexity of Photoshop, and perhaps they went too far in the other direction. Nevertheless, it does get many people out of their fear of post-processing, though my head may explode if I hear someone else asking for a preset: “Your preset took a really good photo!”

I’m glad that’s not just me! I always emphasize that postprocessing (for normal images) cannot add, take away, or fix compositional problems: you can only enhance the presentation of what’s already there. ‘Photoshop’ has acquired a bad rep for being a cheater’s magic wand rather than a digital darkroom suite simply because of a few bad and widely publicised cases…

Hi Ming. Capture One Pro would be a more capable editor than Adobe Lightroom. While I currently use Lightroom on many of my images, I am leaning towards Capture One as a future upgrade path. I could afford Photoshop in CC, but I just don’t like being forced into future hardware and software upgrades, especially given the buggy nature of Adobe software.

I’m sticking with CS5.5 for now; it does what I need it to, and the DNG converter is an annoying but still workable bridge solution for future camera models. Even though it adds an intermediate step, I’d still have to go back to PS for the final edits if I was using C1 or something similar…

There have been rare times when clients changed the output intention. When I had the chance to substitute a more optimized file, then the output was fine. One bad example was a magazine that got into trouble and switched from coated paper to newsprint; the Total Ink was way too high for a night image, and the final print obscured the following page. The problem in the last few years is that many clients cannot imagine that a photographer could deliver optimized CMYK image files. I’ve seen enough screw-ups from temp workers doing bad RGB > CMYK conversions that I no longer deliver RGB for printed output, though I have enough years and experience doing this to get the conversions right.
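The Total Ink check Gordon mentions is simple arithmetic — the sum of the four separation percentages against the stock’s limit. A sketch, with ballpark limits and a hypothetical shadow value of my own choosing:

```python
def total_ink(c, m, y, k):
    """Total ink coverage of a CMYK value: the sum of the four
    separation percentages."""
    return c + m + y + k

# A dense night-scene shadow: acceptable on coated stock (limits are
# often around 300-340%), disastrous on newsprint (often ~240%).
shadow_cmyk = (85, 75, 70, 90)   # hypothetical separation values
coverage = total_ink(*shadow_cmyk)
print(coverage)                   # 320
print(coverage <= 240)            # False: would over-ink newsprint
```

Which is exactly why a file separated for coated paper can soak right through a page of newsprint.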

Using 300% or 600% zoom allows me more accuracy when cleaning up images. While my Wacom tablet is very precise, I like having that extra precision on zoom. Besides, it really does not slow me down: left hand with the pen on the tablet, and right hand on the keyboard for quick power-user moves.

As far as editing real or interpolated pixels, when there is doubt, you return to 100% view and make a decision. That Command-1 keystroke takes almost no time. I’m not really comfortable with calling them “real pixels” because of Bayer patterning and the analogue-to-digital conversion in the camera. Dust on the sensor may be “real”, but there is no point in it remaining in an image. My goal is not to match the screen to the print, it is to match the scene to the printed output, at least in a manner that best conveys the concept.

Whichever one is cheapest, or cooler, or both. OK, the cooler one. And provided it’s cheap. I’ve actually taken up a part time job (simple translation on my weekend nights) to bolster camera funds. I definitely want to scan myself now. A good lab I found — very pleased with them — gave me the price list for scanning my 120 color negs at a halfway reasonable resolution and 16bpc, and MY EYES NEARLY POPPED OUT OF MY HEAD.

In other news: I nailed a Polaroid back for the SQ! And reduced from 5,525 to 1,050! Sourced from the sticks => the postage is more expensive than the back! 😮

Yeah, hoping that back is in working order; it was so cheap I took a punt on it. My wife just texted me to say it’s arrived, so I’ll get some PX or Fuji whatsitsname tomorrow and have a blast with it. Ye-hay. Looking forward to tomorrow’s lunch hour.
The other side of good fortune: a regular 120 SQ back I got last week for spare/alternative film option, might be a dud. I had a roll of Portra 400 in it, took the completed Portra roll to the lab today and the technician said “ooh, that roll’s wound a bit loose isn’t it mate?” [not in English patois like that; but just to give a flavor for his tone]. I think I’ll wait to get the negs back before loading another roll into the back. Shocker.

On the scanner: as a customer in line –> please don’t do that m(. .)m
Pretty please! 🙂

It’s curious, but the lab where my negatives are developed — very professionally and very reasonably; cheaper than the 1hr DPE chain stores! just, but cheaper, and reassuringly slower: two days at least for color negatives. They are probably building up a batch of film before processing, but I think they also put their back into each roll a bit more than the chain stores do. I’m using Horiuchi Color; I’ll show you when you come over, Ming –> a couple of pros here have told me this is where they go, and sure enough, in store you don’t really see casual people like me lowering the tone. But yeah, that very reasonable and professional lab, Horiuchi, for 72dpi scans in 8bpc JPEG, charges almost 4,000 for 12 120 scans. As soon as you start wanting 16bpc TIFF at meaningful resolution, the prices are just plain silly. Really. So I’m just having them develop my color film at the moment: I’m crap at looking at color negs, but they — the negs and the developing — seem pretty damn good. Details are details whether magenta is green or blue is yellow, etc., etc. But yes, I’m building up a pile of 6×6 negs over here.

This was why I even dared mention a D800E the other week: compared to labs doing it for you over here, you would get your money’s worth from the Nikon pretty soon — though not quickly, by any means. When used D800Es cost what used D700s do now, that’s when the price is right, I think. So at least a product cycle away for me still. Until then, my sidekick until burial in the oculus, and still my sidekick even thereafter — the D3 — will do the work.

He’s sporting the 45 2.8P today. I get a lot of double takes from people on the street!

Haha, I won’t. I’m just saying…comparative value and all that. I’ll probably just get them to develop my slide and then I’ll return home to rewash/scan myself. As for scanning options: a cheapo D3200 would work pretty well too, given the high pixel density…

The 45/2.8P is one of the underrated lenses, in my opinion. Optics aren’t perfect, but the lens has that slight bit of field curvature that gives it character…

Please don’t open a can of macro on me, Ming. Even though the scanner is a while off, of course I’ve been trying to get to grips with the glass I’d need and why, and have bumped into all these concepts. We talked glass on the other thread so that’s OK; but all this micro/macro & repro stuff isn’t exactly intuitive, for me anyway… But the D3200 is APS-C, smaller than the 35mm frame, for example, that I want to scan, so a 1:1 lens won’t do it for me any more now. Same again with the D3 and 6×6 negs, and there’s also the aspect ratio difference meaning wasted pixel utility from the scanner…

I suppose, like birders, etc., I should just stick to thinking of “pixels on subject”, in which case high-density sensors of any size are good and crop factors work in our favor? –> should we all be using m4/3 or APS-C for this kind of copy/repro work?

Then again: I’m convinced 1:1 is the ideal. No scaling. But getting that negative and scanner sensor perfectly aligned is probably ridiculously hard. And there’s no dependable (and tonally more neutral) scanner option for 6×4.5, 6×6, 6×7, etc…
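One way to put numbers on the “pixels on subject” and wasted-pixels worries when camera-scanning: the effective scan resolution is just the pixels the negative actually falls on, divided by its physical size. A rough sketch — pixel counts and negative sizes are approximate, and the function name is my own:

```python
def scan_ppi(neg_side_mm, pixels_on_neg):
    """Effective scan resolution when one side of the negative falls
    on `pixels_on_neg` sensor pixels: pixels per inch across the film."""
    return pixels_on_neg / (neg_side_mm / 25.4)

# A square 6x6 negative (~56 mm) only uses the sensor's short side:
# ~2832 px on a 12 MP D3, ~4000 px on a 24 MP D3200 (approximate)
print(f"6x6 on a D3:    ~{scan_ppi(56, 2832):.0f} ppi")
print(f"6x6 on a D3200: ~{scan_ppi(56, 4000):.0f} ppi")
```

So the cheap high-density APS-C body really would out-scan the D3 here — pixels on subject is the figure that matters, whatever the sensor size.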

Hello Ming, Tom, Gordon, Andre, Michael,
I’m new to this site, so I just read these comments.
I’m not here to reopen them with some questions or info input/output, just to share that I really enjoyed reading this most entertaining conversation. Just for the fun of it I might read it again some day with a Hoyo de Monterrey (Epicure #2 for me) and a Lagavulin at hand. Hope to find some more to get me past those long winter evenings that are starting here…
Please keep up this most exquisite form of infotainment. (Sorry for that last word…)

I have varied the post-production clean-up depending upon the output needs. Usually I move in to 300% for that, but on critical elements I have gone to 600%. Definitely a Wacom tablet makes this much simpler and faster. It may not seem exciting, though it is incredibly important. Note to enthusiasts: when you view at a magnification that is not a multiple of 100%, Photoshop interpolates what you are viewing on the monitor.

You should see me working in Adobe Illustrator; quite often I go in to 1800% to clean up points and anchors. Basically the 300/600 is a remnant of 19 years working in Photoshop, beginning when file sizes were usually smaller, and soft proofing was rare. I suppose 200/400 may speed things up and make my life easier. 😉

Haha! Actually, with larger files, I find you can get away with retouching at slightly lower magnifications because the repro magnification usually isn’t as high. When files were small, every single pixel has to be perfect because errors get interpolated…

Gordon, I agree with your comment here. When I develop photos and go in for the final touches I prefer a Wacom. I show people how they can accomplish the final development on a budget, so I share GIMP and its methods. I myself am still learning what detail to leave, what to remove, and the difference. This entire article and its comments make a huge difference in that regard.

I wasn’t shooting product then, but I was told that the physical items were very carefully selected beforehand for minimal blemishes/ imperfections; lower reproduction resolutions helped mask defects and dust. There were also of course retouchers that would work on prints with brushes and paints. These days, however, we enlarge details to previously unheard of sizes, so the entire imaging chain has to tighten up commensurately…

A friend of mine had a small printing house in the pre-digital era that specialized in printing packaging specifically for photo shoots. He showed me sheets of Wrigley gum wrappers he’d printed, all extra-sharp, extra-vivid, and perfect in every respect. From those, someone in the company assembled perfect gum packages for the photography.

Sadly, this doesn’t happen anymore: we’re told to make up the difference in Photoshop. On a recent assignment, some of the product was so bad (prototypes) that I had to shoot different items and composite them afterwards to get one passable one…

Here is something I noticed a few days ago while editing a wedding portrait: the basic consumer does not always notice the work at all, as might be the case with this watch too. While I worked about 30 minutes to remove blemishes, clear out a pole and some stray flyaway hairs, clean the harsh midday-sun blue shadows from a dress, correct the exposure, make the sky pop, etc. etc., my fiancée’s friend wanted to see what I was doing. So I showed her before and after, and all she could see was the exposure. Bleh.

Anyway. Post-production is as important as the actual shoot. I think it might be more that the average consumer is already so used to high standards that they don’t even know where to look for the imperfections. But I bet they would notice if they looked at the image for a minute or two.

And then, I had weddings in which the couple did not want any pp at all. All JPEGs, dumped straight to a hard drive. It was an issue of money, and business-wise, I shouldn’t care. As a photographer, I really hated the notion of unedited work floating around.

Very true: the consumers want more headline numbers – resolution and volume – but aren’t aware that the work required afterwards balloons exponentially. Most of my high end clients can tell, though.

A lot of the time, postprocessing and the actual shoot are inseparable, especially for product shots, because what the client wants is frequently physically impossible to achieve in a single image – thus we have to plan accordingly.

I feel your pain on releasing unretouched images: I don’t do it at all, as a rule. Though that said, I suppose film work is a key exception here since there’s no postprocessing going on in the same way I’d do with a digital file; or perhaps the analog would be consumer minilab and photo CD = JPEG, DIY developing = RAW…

Trackbacks

[…] wrinkles, fold lines etc. In fact, the process is fairly similar and equally as painstaking as commercial retouching for watch photography. Retouching complete, I made a copy of the base layer, and desaturated it to work on the luminance […]