The coolest technology to come out of Adobe MAX is, sadly, not technology we already have access to. As with Adobe's Project Cloak, which we showed you earlier today, it's the incredible 'Sneaks' sneak peeks that really wow the audience. Case in point: check out Project Deep Fill, a much more powerful, AI-driven version of Content Aware Fill that makes the current tool look downright primitive... to put it lightly.

Deep Fill is powered by the Adobe Sensei technology—which "uses artificial intelligence (AI), machine learning and deep learning"—and was trained on millions of real-world images. So while Content Aware Fill has to work with the pixels at hand to 'guess' what's behind the object or person you're trying to remove, Deep Fill can draw on its training to create filler de novo with much greater accuracy.
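To make the distinction concrete, here is a minimal, purely illustrative sketch of classical pixel-propagation inpainting, the family of techniques Content Aware Fill belongs to. This is not Adobe's implementation (CAF is actually patch-based); it just shows why such methods can only 'guess' from pixels already in the frame:

```python
import numpy as np

def diffuse_inpaint(img, mask, iters=500):
    """Naive pixel-propagation inpainting: repeatedly replace masked
    pixels with the average of their four neighbours. Like Content
    Aware Fill, it can only work with pixels already in the frame --
    it cannot invent content it has never seen."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()  # crude initialisation of the hole
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]  # only masked pixels are updated
    return out

# A horizontal gradient 'sky' with a square object to remove
img = np.tile(np.linspace(0, 255, 64), (64, 1))
mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True
img[mask] = 0  # the unwanted object

filled = diffuse_inpaint(img, mask)
```

On a smooth gradient this works well, because the surrounding pixels fully determine the hole; on a hole that hid an unseen structure, a method like this can only smear what is around it, which is exactly the gap a trained network aims to close.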

The examples used in the demo video above are impressive to say the least:

And just when you think the demo is over, you find out that Deep Fill can also take user input—like sketching—into account to completely alter an image:

In this way it's a lot more than a 'fill' feature. In fact, Adobe calls it "a new deep neural network-based image in-painting system." Check out the full demo for yourself above, and then read all about the other 'Sneaks' presented at Adobe MAX here.

Comments

The result of "deep fill" on the people in the rock arch was pretty awful, requiring at the very least use of the clone stamp. And it would be very simple to remove those people against a blue sky without this chicanery anyway.

Looks great, if you can understand what he is talking about. What happened to 'one exposure, one image' - the skill being in the person who presses the shutter? Remember the days when, to enter a competition, one had to submit an affidavit to confirm that the image was not altered in any way. AI is not photography. It is incredibly clever, but it is not real.

I like photography but I am neither a pro nor an artist. Just having fun.

After thousands and thousands of photographs it appears to me that the appeal of AI photography is, at its very best, extremely limited. It may be useful to me once every 2218 photos.

The last time I had an award-winning picture that required a pimple removed was in 2008. Oh, I forgot - in 2013 I had this insane photo of my girlfriend, and right in front of her nose there was this ball. If Deep Fill had been available, I could have restored her nose, and she would have been fitted with a new nose, learned by analyzing literally millions of nose photos.

And right after that, Google will propose to replace our neurons with AI neurons, trained by the analysis of literally billions of thoughts.

You take a photograph of a group of people. There are 9 white people and one black person in the group (or vice versa). Automated AI grabs hold of your image and says to itself "something's wrong here" and changes all the people in the group to the same colour.

You take a landscape and there are a couple of electricity pylons within the frame. Automated AI grabs hold of your image and says to itself "something's wrong here", and adds another 5 pylons to make the composition "correct".

I'm so thrown off by how many people thought that, before the invention of content aware fill, photographs told the truth. This is Day 1 stuff in any photography class: no photograph has ever told the truth. Every little decision you make projects your viewpoint onto the scene. We're talking about the sophistication of the tools you use to do that, nothing more.

Nothing wrong with this as art, everything is wrong with it as record. If you look closely at the resulting images, you'll see that it doesn't just remove the intended item, but actually changes everything within the rectangle. This is most obvious in the video where they remove the second group of people in the second image and the rock structure behind them changes dramatically, but it's also visible in the last image (upper right corner of the rectangle). If you can select a more precise crop region (shaped to the subject to remove), then presumably it will not alter the surroundings, but changing things that aren't occluded is qualitatively different from hallucinating things that were not visible.

There are still various ways to affirm the out-of-camera validity of an image, for example, the methods used by http://www.fourandsix.com/

mgrum - You have made some very good points. As with most technological advances, there are always benefits that critics overlook. The uses you describe are certainly legitimate and desirable by many.

However, it's likely that a high proportion of camera users (note I don't use the term Photographers) will decide that it's easier to produce images by fakery.

That encourages laziness at the taking stage. People will make even less effort to compose a photograph properly. That will take away much of the enjoyment and most of the skill needed to take a good photograph.

Eventually the process will become automated, in image editors and later in cameras. Users will initially have the choice whether to opt out, but even that will eventually disappear, and the art of photography will suffer.

It's already very difficult to find an image that hasn't been altered in some way. Photographs are becoming increasingly untruthful, and before long will become a complete lie.

People have been decrying new technologies due to what they see as the inevitable losses of older practices since Plato, who (in writing) essentially predicted the death of memory because of the growing popularity of writing. New technologies change things and we adapt to those changes. The same predictions you are making here were also made about calculators, computers and ... digital cameras!

phototransformations - I can tell from your handle that you like to transform rather than to capture! I prefer photographs to be an accurate representation of the beauty of the scene or subject that prompted me to photograph it.

@entoman, do you have the same objections to the manipulations photographers such as Ansel Adams did in the darkroom? All that dodging and burning, not to mention using red or yellow filters to turn blue skies black - and for that matter black-and-white photography itself (and long lenses or wide lenses) - are all conscious manipulations of the image that most people seem to feel are fine, except in journalism. This doesn't seem substantially different to me. It's just another tool to try to achieve a vision, which you can use or not use as you wish.

phototransformations - I see a fundamental difference. I see nothing wrong with using filters or with dodging and burning, as these are used to INTERPRET and ENHANCE an image artistically, but do not alter it.

What I don't like is the prospect that at some stage in the fairly near future, editing software, and at a later date cameras themselves, will use AI techniques that take control away from the user.

AI uses machine learning, which means that it will learn your preferences and apply them to future images. You will be presented with a "before" and "after" image and be asked to choose one or the other. If you frequently choose the image in which AI has added or removed "blemishes", the software will "learn" and will apply that preference to all of your images.

At first you will have something that "you can use or not use as you wish", but after a time that choice will be taken away from you.

@entoman - "That choice will be taken away from you" is not what I've experienced so far in my lifetime (and I'm 66) with technological advances.

I still drive a stick shift car, though automatic transmission vehicles have been around for generations. I still wash dishes by hand, though dishwashers have also been around for generations. And I can still shoot in manual mode, with everything the same as it was in 1969, when I bought my first SLR, except that I'm shooting digital. "Program" mode "takes away" my choices and "Auto" even more so, but I can choose not to use them. Maybe generations from now the camera will do everything for us, we won't have cars we can drive ourselves, and we'll live in a Wall-E world, but I don't see this happening soon. There are always people who like to do things themselves, particularly in creative fields, and therefore there's always going to be a market for products we can control ourselves, even if we're considered "retro" when we do that.

Coming soon: cameras that don't have memory cards or inbuilt memory, but instead transmit all your images directly to "the cloud", with the manufacturers charging you a monthly fee to access your pictures!

I propose a new term "snapograph" to describe images that have been produced with little thought, and can only be "rescued" by adding or removing conspicuous features such as people in the wrong place, or trees growing out of people's heads.

Another term "journograph" could describe images that are intended as factual representations, and are absolutely unaltered, with the exception of a limited degree of cropping.

I hereby define a "photograph" as a previsualised image, carefully composed, and (where desired) subjected to manipulation of colour balance, brightness, contrast, and saturation, with the intention of producing something with artistic merit. A very limited degree of "healing" is permitted to remove dust specks or unwanted artefacts such as out of focus highlights that render as aperture hexagons.

See also: when cinema camera companies show demo reels full of shots from major motion pictures which are like 80% CGI. What do you think I learned about you from a Transformers shot except that you know Michael Bay?

Easy to answer, it was the day you stopped using film! Every pixel in every digital photograph you've ever taken goes through a computer algorithm that manipulates it before it gets stored on your memory card. Even if you are shooting RAW.

How to lie with photo technology. I object to image filling methods, whether AI based or not. Once, you start removing objects from a photo, you are creating a lie. If you want to create art, then do so and say so to your viewers.

The power of photography loses something important when viewers cannot trust that what they see is real. Thumbs down to Adobe's Project 'Deep Fill'.

David, I understand where you are coming from, but historically photography has always involved manipulating images to some extent - it's just gotten easier and more refined. One of the most amazing prints I have ever seen is an image titled "When the Day's Work is Done" by British photographer Henry Peach Robinson in 1877. The image is a composite of six negatives. I agree that photographers who substantially alter their images should be clear to viewers that that is the case. That said, I have always admired the creative work of photographers like Jerry Uelsmann - precisely because he makes no pretense about the realism of his images.

I think the biggest concern of people is not really people who want to make "Art" and edit it like many other photographers already did back then or, even more, painters (they could always easily add/remove features from the scene they were portraying).

The issue here, IMHO, is creating fake NEWS pictures, for example - even more than has always been the case. Or pictures that win prizes and such. Situations where one would expect to see a real thing and instead gets a completely manipulated image.

CC is going to have more and more value for the subscribers. Initially, CC made "no need" for other software solutions. But in the near future, buying the camera won't be necessary to make beautiful "photographs".

And now we know the technical reason why Adobe is moving Lightroom to a suite of Cloud tools. Their next features will require Big Data processing impossible without a CPU farm. Of course, there's the question of whether WE actually need features like this.

I don't work for Adobe, and I don't appreciate your ad hominem attack. You'll notice that I questioned the need for such functionality, which is something that an Adobe employee would be stupid to say.

Yes, there is. It's called Sensei. It's the same reason why Google search isn't a tiny app on your computer or phone. Google is crawling the entire web continuously to map webpages and content. That takes warehouse sized server farms. Then all you do is use a thin client on your device to enter the search target and retrieve the results. Adobe's Sensei is the same idea, only using images. It is a learning-type of tool that requires vast amounts of input to understand content relationships.

But, yes, if Adobe has spent the time to develop big data driven tools like Sensei, then it sees an opportunity to profit from it. But that's why any company develops products.

Now, if you're ticked off by Adobe's subscription model, say so. I'd agree with you that their handling of the cloud transition has not been transparent or trust-inducing. But part of their justification for going to a cloud of micro-tools does make sense.

I know perfectly well about the various ways to use deep learning on big data, and it has absolutely nothing to do with Adobe's decision to go to a subscription model (where they have to offer some type of cloud services to justify the massive increase in cost). You're conflating them finding ways to monetize their newfound data with their reasoning for having cloud services in the first place, which was to charge people a monthly fee.

Yes, it could go either way. Certainly Adobe was struggling to maintain steady income growth with a suite of products entering maturity and therefore less and less able to entice customers to upgrade. The subscription model nicely deals with that nasty little problem. I'm sure that if the camera manufacturers could do the same, they would. The cloud/rental model allows software manufacturers to effectively enforce the ownership rights they have always claimed - that's what a license is; we as customers have never really owned our software tools but only used them for a specified length of time and purpose. Adobe's original justification for moving to the CC model was not tools but reliable and regular updates - more frequent, more targeted, and better tested than the massive yearly or biennial version pushes they had been doing. So their cloud connection was really a phone-home link. I guess the question is one of what is a fair price for the capability. Certainly $240/year is ouch.

Certainly we accept the idea of renting a car - it's called a lease - as long as we accept that the low cost of acquisition means that even though we might drive the latest iteration of a car, we never own it - and the car manufacturers accept that in order to enforce their ownership rights they'd have to send repo agents to confiscate their now-stolen property.

It's only in the software industry that we can expect to be shut down automatically and effortlessly by vendors at their whim - but with the Internet of Things rapidly evolving, I would not be surprised that our connected toys and vehicles and such will eventually demand their monthly fee and the repo agents will be obsolete.

It's all about reversing the nature of the free market - instead of buyers choosing their manufacturers, now the reverse is becoming ever more true.

I'd also note that the up and coming generation places little value on ownership of - anything, really. What is important to them is access. Ride-sharing services and self-driving cars are just examples of this new thing-as-service world. I can't say I disagree with it...the supposed freedom and convenience that a personal car provides comes with a staggering price tag for most folks. Far better is to rent what you need when you need it, and leave the maintenance costs to the rental vendor. Ride-hailing and self-driving cars will only quicken the transition to a tiny-house just-in-time way of life.

ozturert - Yes, I've noticed a recent pronounced tendency for dpr to use sensationalist and sometimes misleading titles. It seems to be a recent management decision, designed to grab attention and boost dpr readership.

The fact that it also whips up a frenzy on the forums is of course even better for dpr. The more time people spend here, the more likely they are to buy something via the Amazon links on every page.

Not a bad thing really - it's what pays dpr staff wages, and enables us to have access to all the reviews.

It only samples what is in the picture already, which is the same as all current algorithms. If it works, then an AI-based solution would create new pixels based on what it has learnt from other images. I am sceptical, because a lot of auto-fix software works really well in examples but less well in the real world. But if it did, it would be completely different to the sample you have posted. Why not try it on the first image from the article to see what happens?

It would be nice to try, if you can find the original source file of the sample photo in the video. Now it has the square white box around the "obstacle" that needs to be removed, so it is not a fair test at all.

And many of the photos there, Affinity would definitely clean up just as nicely.

But where Adobe really has the edge is the "reshape" driven by the drawn shape. On Affinity, that requires using tools like the Liquify Persona afterwards with a layer mask. So what Adobe does is combine two tools into one.

Some marketeer came up with the buzzword 'deep'. Now it has been glommed onto and applied to almost everything marketing is trying to sell. And don't get me started on fake AI claims. Put on your hip boots - it's getting "deep."

@mattd007 "Unfortunately, the deep-learning catch-phrase is now morphing into the more general and ambiguous term of AI or artificial intelligence. The problem is that terms like ‘learning’ and ‘AI’ are overloaded with human preconceptions and assumptions – and wildly so in the case of AI."

The real tragedy of it all is that soon we won't need to bother taking pictures at all anymore whatsoever. As everything gets more manipulated and faked, the act of taking a picture of anything for the purpose of portraying any kind of reality becomes obsolete. We just need 3D to become better, and eventually good enough to compose any image with whatever elements we want in it.

And hey, I have nothing against it and progress surely is inevitable. Just that the time is soon coming to call "photography" something else as it irreversibly fades out from what it once meant or was supposed to be.

3D CGI has been mature for quite some time now and is able to produce real-life quality fairly easily. The only problem is the time and resources it takes; it's still a lot quicker and easier to capture and post-process a photograph than to build it in 3D. But I agree, we will see that change very soon.

Yes, "authentic" photography may become a niche, however, such status might amplify its appeal to some of us who like "swimming against the current". Kind of like film photography now, or vinyl discs in music. So I think real photography will survive.

To me, this is no different from adding artificial flavours and colors to food. As long it is "stated on the label", I see no problem with it. The problem of course that, unlike food packaging, there is no full disclosure of the degree of manipulation with photographs right now. I think a simple scale of what percentage of the image content was "artificially added" would be a fair addition to the exif data. Yes, I know exif data could be altered or erased, blah, blah. Still, I think we would all feel better if we had some sort of way of knowing whether it is an "organic" or "processed with artificial additives" photograph.
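The "percentage of the image content artificially added" idea could be approximated quite simply. A hedged sketch - the function name and tolerance are my own illustration, not any existing metadata standard - that measures what fraction of pixels differ between an original capture and its edited version:

```python
import numpy as np

def altered_fraction(original, edited, tol=2):
    """Fraction of pixels that differ between an original capture and
    its edited version -- a rough stand-in for the 'percentage
    artificially added' disclosure proposed above. `tol` ignores
    benign rounding differences introduced by re-encoding."""
    diff = np.abs(original.astype(int) - edited.astype(int))
    # A pixel counts as changed if any channel moved beyond tolerance
    changed = (diff > tol).any(axis=-1) if diff.ndim == 3 else diff > tol
    return changed.mean()

# Simulate removing a 20x20 object from a 100x100 'photo'
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8)
edited = original.copy()
edited[40:60, 40:60] = 0  # the 'removed' object region

pct = 100 * altered_fraction(original, edited)
```

Writing such a number into EXIF would of course rely on an agreed-upon tag and on the number being computed by trusted software, which is exactly the disclosure problem the comment raises.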

PS has had things like multi-step undo for a very long time: It's called the history panel and you can select how many steps you want saved - hundreds if you'd like, then with a click, you can back up as far as you'd like. It's far better than hitting control-Z many many times. Certainly no application is perfect, but you're criticizing them for lacking something that they've had for years.
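The history panel described above is, at its core, a bounded multi-step undo stack. A minimal sketch, with a hypothetical class of my own devising rather than Photoshop's actual internals:

```python
from collections import deque

class History:
    """Bounded multi-step undo, like an editor's history panel:
    each edit pushes a state; undo steps back as far as the
    configured cap allows, and the oldest states are evicted."""
    def __init__(self, initial_state, max_steps=50):
        # maxlen caps retained history; oldest entries fall off
        self.states = deque([initial_state], maxlen=max_steps + 1)
        self.redo_stack = []

    def push(self, state):
        self.states.append(state)
        self.redo_stack.clear()  # a new edit invalidates redo

    def undo(self):
        if len(self.states) > 1:
            self.redo_stack.append(self.states.pop())
        return self.states[-1]

    def redo(self):
        if self.redo_stack:
            self.states.append(self.redo_stack.pop())
        return self.states[-1]

h = History("blank", max_steps=3)
h.push("crop")
h.push("levels")
h.push("heal")
h.push("sharpen")  # with the cap at 3 steps, 'blank' is evicted
```

Real editors store deltas or tiles rather than whole states to keep memory bounded, but the stack discipline is the same.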

To me, content aware fill was the best innovation PS offered in 10 years. This takes it to a new level. I'm not sure how much I would use it (traditional CAF works fine for 90% of my requirements), but it will be nice to have when I need it.

Content aware fill really mainly changed how fast the work was. If you used a clone stamp, or made a simple selection, added it to another layer, and then used a layer mask to clean the edges, a little clone stamping to fix them, and curves or levels for color (like skies etc.), you often got the same result. But when the difference was a couple of minutes vs a couple of seconds, it was a time saver.

On this line, you really ought to try out the Affinity Photo healing tool.

Instead of repeatedly trying to outwit Adobe's 'guesses'/'ai' cleaning things near a dark boundary, you find a small _window_ moving with the mouse so that you can see exactly what you're going to get.

As well, the tool has smart boundary detection, which lets you go right up to the edge of a cleanable area and, with a clever rolloff, correct just as you need.

I found this out after many minutes getting nowhere with a 'let's just use Photoshop quickly' foray -- and then took beneath-flowerbox rust stains off a stucco outside wall, preserving its weathered look, in one pass and a few seconds only.

Other people can indeed do very clever optical algorithms, and with great user experience.

@NarrBL, thanks for the Affinity tip. I pull Affinity out from time to time for its path clipping abilities which work better than PS in many cases. I'll also try out this competency you just mentioned.

Never once in history has a photograph recorded the "truth". Every image is one view manipulated by the lens, sensor, perspective and in today's digital world, the internal processing the engineers at the camera company decided to put in the camera for you. Personally I am not interested in photography for documentation. It's a medium of expression, not a xerox machine.

I'm absolutely for recording the truth. Fake content is just lame. BUT! There are occasions when you have a great shot but one freaking item destroys it. I recently had a spruce in the wrong place that pulled the eye on it. There wasn't really a different spot as this patch of extreme red just grew there. I needed several passes of Gimp-Resynthesize to remove the spruce. Having a better content-aware fill would have resulted in a better photo and likely faster.

And hopefully having such tools at hand makes it less likely that people just remove plants, or parts of them, in the field to get a better composition - which drives me crazy, especially when "photographers" advertise such "techniques" in their videos.

I agree completely with you - that is, if it were not for the fact that there is no 'truth'. During perception, just 10% of the information processed stems from our senses. The greater part comes from our previous experience and is used in processing and interpreting the incoming stimuli.

But 'honest photography' is still something to aim for. I'm starting to wonder whether slide/transparency photography could be part of my future - and not only of my past.

"Adobe's Project 'Deep Fill' is an incredible, AI-powered Content Aware Fill"

"When we can do that with a single line draw already in a second, this ain't so impressive what Adobe will do."

Dr Phil: "Somebody's lyin' here." =)

Don't you think it is a bad way to start a discussion, or even communication, by assuming by default that the other person is lying unless proven otherwise? It says more about your own character, being as dishonest as you expect others to be...

It's Dr Phil's character you need to check on... He's the one quoted... I bet if he was sitting between you and the person that wrote that title, that's exactly what he'd say. Gotta make sure those arrows are pointed in the right direction... :=)

I remember some years ago seeing a similarly impressive demonstration of the "Shake Reduction" option under the Filter menu. I had some minor success using it but after a while gave it up since more often than not the results were poor. I hope this new feature doesn't disappoint me down the road too.

I assume that is in response to where I said, with tongue firmly in cheek "And what about all those lovely people shots ruined by having landscapes behind them"

Well, actually there are many such scenarios - the most well known one being the portrait in which a tree or pole appears to be growing out of the subject's head. Think about it, and you'll realise that there are many situations where hurried and thoughtless snapography results in distractions behind (or in front) of the person being snapographed.

Real photography is about LOOKING at what you are doing, and making sure that you COMPOSE the photograph in such a way that the distracting object doesn't interfere.

This entire thread reminds me of the constant b/s of film vs digital, JPEG vs RAW, zoom vs prime, Nikon vs Canon, and on and on.

Photos have always been altered. Always. The lens, film, camera, filters, and now the possibility of extreme, extended post-processing have pretty much been there since the beginning.

HDR is not a new thing, it was done in the 1800s. "Stitching" panoramas was also done. Using lenses other than 35-50mm in SLR format is a form of alteration. Yet even 35-50mm lenses do not capture what our eyes fully see. It can be argued that if you do not shoot a 180 degree or so pano you are not capturing what we see with our eyes if that is what is meant by a "true" photo.

Photography today is a set of tools just like it always has been. The tools today are more diverse and more powerful than ever. Use the ones you want to, drop those that don't fit your needs. Stop debating worthless topics like this.

i.e. if you add or remove something that was not present (if I said "visible" someone would bring up IR/UV) in the original scene (even if you had to turn your head to see it... like enjoying a panoramic view...) then it's something other than a photograph. IMO - but I realize that stating that it's my *opinion* will not stop someone chiming in to tell me I'm wrong *sigh*
