Adobe uploaded a sneak peek at a future update to Photoshop CC yesterday, and it's going to make a lot of people's photo editing lives a whole lot easier. Using its much-lauded "Adobe Sensei" artificial intelligence technology, Adobe is finally taking subject selection in Photoshop to the next level.

As advanced as Photoshop has gotten, cutting people out of your images—for whatever reason—is still an incredibly cumbersome process. Throw in some frizzy hair or a lack of contrast between subject and background and it turns into a full-blown nightmare. Enter a fancy new AI-powered feature called "Select Subject."

The tech "uses machine learning to detect objects in the image" and lets you get a nearly-perfect starting selection in "just one click."

What's more, this works with multiple subjects too:

Check out the video above to see the full demonstration. As with most of Adobe's AI-powered features that have been teased lately, the 'one-click' results are impressive. And unlike those features shown off at Adobe MAX, it's already in the works for an upcoming update to Adobe Photoshop CC.

No word yet on when exactly that update will arrive on your desktop, but we can't wait until it does.

If you pause the video and take a look at the selections, there is a considerable amount of cleanup required. Given the relatively good subject-background contrast in their demo shots, it isn't a lot better than what is already there.

Impressive wouldn't be my choice of description, but it might be if they improve it to cleanly select subjects in difficult selection scenarios. Also, they should have used someone who knows how to use Photoshop for the "before" examples. Almost anything looks better than someone half selecting an object with a shaky hand.

With the revoking of net neutrality, how easy will it be to get to the Adobe website or cloud? The PS review is very impressive, but I have little use for it as a mature photographer in my later years. Hope the rest of you enjoy this.

Great observation. I just did a quick search and apparently, like all graphic arts terms, deep etching means different things to different people. One group uses it to mean clipping path, a second considers it a synonym for mask, and a third group uses it to mean deleting the background around the subject and leaving the empty space transparent. All three are completely different processes—pretty dangerous situation. I've always called them dropouts, but I'm from Los Angeles so maybe the terminology varies by region.

Okay, and so what? Who cares what you call it? I'd prefer to call my work photo-art over photography anyway. Personally speaking, I'm glad to see old-school, purist, reference-shooter types going away. Tweak it in every way imaginable, just as long as the end result is amazing ☺️

@BadScience .... Creative, yes, and as for the majority of users being non-photographers, I agree. But this article is directed at photographers on this site, like you said, amateur ones... so can you imagine the abortion of images that will ensue... and as @christiangrunercom said above your comment, "you can make the image suit your vision"... fine, but he has agreed with me, it's not photography... it's a make-believe world he is describing that doesn't exist.

If an amateur photographer wishes to use it to create a mess, that is up to them. Such people will create a mess with or without Photoshop. Some deep-etching I see from even *good* photographers is absolutely appalling, and generally goes unnoticed by people who do not know how to deep-etch and have not spent, literally, years doing it.

And advertising, for example, is a make-believe world. Why send your models or products to some exotic location when you can shoot them against a green screen in a shed in Slough and put them anywhere using software? This has been going on since the year dot in photography. Adobe are just making it easier. It's a feature designers and layout artists want.

I think this will be a good time saver once it's matured, but as with the Content Aware tool demos, the images shown all seem to have been taken to help with showing this effect. Narrow DOF and good contrast. These would probably be easy to make using the existing tools. Still, it shows that Adobe are still pushing ahead, and not just taking our money (which I have no issue paying, as it's a service and not just a program) and stagnating. IMO, with cheaper programs like Affinity and Luminar showing a very mature set of tools at a much lower price, Adobe has to keep pushing ahead.

Don't forget that Adobe didn't invent Content Aware - it bought in the technology from external researchers. It would be wonderful if the lethargic Adobe developers could do something useful, like providing a colored icon option for the UI.

Staying with the latest tech is all good, and I'd give them $50 a year for the newest version, but if they try to make me pay $1 a month, I'd rather just hack it! Screw monthly subscriptions! I don't have time for all that, and I sure don't want any kind of automatic withdrawals from my account.

More lazy journalism. "It's going to make a lot of people's photo editing lives a whole lot easier" How do we know this to be true? Have you personally tested it or are you like Barney who writes articles based on zero hands-on experience and no thought but is shocked when people don't fall at his feet with rapturous praise? Adobe puts out a lot of buggy software; how do you know you're not misleading your readers?

I'm excited. Just terminated my 4-year subscription and am waiting to upgrade Capture One from 9 to 11 (for Sony, of course). Photoshop would do better with some AI for its UI. And discard 3D: it's absolutely useless and non-functional.

I could recommend RNI for C1 if you're into applying old-look formulas to your images: http://reallyniceimages.com/ The free iOS packs from them are interesting (I missed the 'analog' photography movement).

I'm excited to see some of their Sensei tech show up in Photoshop! Wish there were a release date. This would definitely save me a lot of time, even if it is just a faster start before going in to refine the details.

Hopefully this will work out in PS, as it could make tough selections (hair, for example) much easier and faster, rather than having to rely on tools like Refine Edge, etc. It would still probably require you to shoot a model against a white or other plain background where the AI can quickly figure out what is hair and what is background. I know there are tools that work similarly to this, but probably not with the level of ease this could bring to difficult selections.

I wanted to ask the same question: is the work done within the client or is it handled elsewhere? Adobe already says that they have to 'evaluate' your images to apply machine-learning actions to them, but you are provided a means to opt out, which I would assume would break the process.

What if "Select Subject" is broken without evaluation by a machine-learning process? What if every image you work on has to be sent offsite for 'evaluation' by Adobe's servers every time you make a selection? The privacy, security and data-usage issues are enormous.

OTOH - it's possible (I think) that they trained the machine-learning algorithm against a set of images, and then simply load the results of that training into the software which runs on the client. Or maybe they transfer a limited set of data from the client to the server for some part of the analysis (i.e. to identify the type of scene from a low-res copy), but the real work (actually generating the detailed mask) happens on the client.
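That "train on the server, ship the frozen model, run inference on the client" pattern can be sketched with a toy per-pixel classifier. Everything below is invented for illustration (the weights, the model, the names); Adobe's actual model would be vastly more complex, but the deployment idea is the same: the learned parameters travel with the application, so making a selection needs no network call.

```python
import numpy as np

# Hypothetical frozen parameters, as if produced by offline training on
# a server and shipped inside the application; names and values are
# invented for illustration only.
TRAINED_WEIGHTS = np.array([0.9, -0.4, -0.4])  # favour red, penalise green/blue
TRAINED_BIAS = -0.1

def select_subject(image):
    """Classify each pixel as subject/background entirely on the client.

    `image` is an (H, W, 3) float array in [0, 1]. No network round-trip
    happens at selection time: the "learning" already happened offline.
    """
    scores = image @ TRAINED_WEIGHTS + TRAINED_BIAS  # linear score per pixel
    probs = 1.0 / (1.0 + np.exp(-scores))            # sigmoid -> probability
    return probs > 0.5                               # boolean subject mask

# A tiny synthetic image: a reddish "subject" on a bluish background.
img = np.zeros((4, 4, 3))
img[:, :, 2] = 0.8                 # blue background
img[1:3, 1:3] = [0.9, 0.1, 0.1]    # red square in the middle

mask = select_subject(img)         # runs locally, no server needed
```

The low-res scene-identification step the commenter mentions could work the same way in reverse: only a thumbnail leaves the machine, while the full-resolution mask is computed locally.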

I wouldn't bet on it (at least not initially). If they do, it will be PS first, and then maybe LR a year or more later. I'm personally about ready to dump LR and just use PS for all my editing, as the tools in PS are much better (a bit harder to understand and find sometimes, but I think the quality of some edits, spot removal for example, is far better in PS).

I should probably add that I'm not editing thousands (or, in some cases, even hundreds) of photos per week, since my schedule has caused me to reduce the number of hours I can spend shooting each week. So, for me, almost every photo is something new, and I maybe do 50 seriously edited photos a month (out of perhaps 200-300 taken). Granted, that comes out to about 3-4 per day, but I usually do most of my editing on weekends for a few hours each day. So, in a nutshell, the LR workflow is nice, but I can live without it too. I'm also concerned about putting all my photos into one single system (at least with PS, I can export them to other formats and edit in other programs, which can be done in LR too I guess, but your options are limited).

Granted, Adobe has said they will continue to support LR, but some of us have found that to be only partially true. PS I don't see them abandoning anytime soon, as it's one of their flagship products.

This looks a little better than Topaz ReMask and a little worse than Vertus Fluid Mask. What I'd like to see Adobe automate is the decisions on whether to expand the selection, reduce the selection, feather the selection edge, and if so, by how much. Far too many decisions to make.
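Each of those grow/shrink/feather decisions is, mechanically, a simple mask operation. A minimal numpy sketch (4-neighbour morphology plus a repeated box blur; note that np.roll wraps at image borders, which a real implementation would pad instead):

```python
import numpy as np

def expand(mask, px=1):
    """Grow a boolean mask by `px` pixels (simple 4-neighbour dilation)."""
    out = mask.copy()
    for _ in range(px):
        out = (out | np.roll(out, 1, 0) | np.roll(out, -1, 0)
                   | np.roll(out, 1, 1) | np.roll(out, -1, 1))
    return out

def contract(mask, px=1):
    """Shrink a mask: contracting the subject is expanding the background."""
    return ~expand(~mask, px)

def feather(mask, passes=2):
    """Soften the hard edge into an alpha ramp with repeated box blurs."""
    alpha = mask.astype(float)
    for _ in range(passes):
        alpha = (alpha + np.roll(alpha, 1, 0) + np.roll(alpha, -1, 0)
                       + np.roll(alpha, 1, 1) + np.roll(alpha, -1, 1)) / 5.0
    return alpha  # values in [0, 1], soft near the old hard edge

mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True              # a hard 2x2 selection
soft = feather(expand(mask, 1))    # grow by one pixel, then soften the edge
```

The hard part, as the commenter says, isn't performing these operations but choosing the amounts; that's exactly the kind of per-image judgment an ML model could plausibly learn.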

Well, if it selects the hair just fine and I have to manually add in the hand, I'll take that in a heartbeat over the reverse. Selecting hair is maybe my least favorite task in Photoshop. Even if this just gives you a starting point that needs refinement, I will use the crap out of this.

The awesome thing about machine learning is that results improve the more it's used -- Adobe might have even implemented technology that allows them to include the results of end-user interactions with this feature for this purpose.

Otherwise, it'll only improve as Adobe trains it -- but that's still a huge improvement over static algorithms, which only change when they're reprogrammed, i.e. when the software is next updated.
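If Adobe did feed end-user corrections back into the model, each correction would amount to a small gradient update. A toy sketch of the mechanics, using an invented three-weight linear classifier (not Adobe's model): every time a user overrides the prediction for a pixel, one stochastic-gradient step nudges the weights toward that correction.

```python
import numpy as np

# Invented starting weights for a toy per-pixel subject classifier;
# nothing here reflects Adobe's actual model.
w = np.array([0.5, -0.2, -0.2])
b = 0.0

def sgd_step(w, b, pixel, label, lr=0.5):
    """One stochastic-gradient update from a single user correction.

    `pixel` is an RGB triple in [0, 1]; `label` is 1 if the user marked
    the pixel as subject, 0 otherwise. Each correction nudges the model,
    which is how accumulated use could improve results over time.
    """
    p = 1.0 / (1.0 + np.exp(-(pixel @ w + b)))  # current prediction
    grad = p - label                            # logistic-loss gradient
    return w - lr * grad * pixel, b - lr * grad

pixel = np.array([0.2, 0.2, 0.9])  # a bluish pixel the model scores poorly
before = 1.0 / (1.0 + np.exp(-(pixel @ w + b)))
for _ in range(20):                # the user keeps marking it as subject
    w, b = sgd_step(w, b, pixel, label=1)
after = 1.0 / (1.0 + np.exp(-(pixel @ w + b)))
```

As the reply below this comment notes, doing this safely in a shipped product is harder than the sketch suggests: unsupervised feedback from millions of users can just as easily degrade a model as improve it.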

@ptox: I wouldn't hold out much hope for "results improve the more it's used". While it's in general true that that is how ML works, it seldom works properly without very specialised supervision, i.e. it needs feedback from an expert to learn. And that is rather difficult to build into a standalone consumer product.

There is a big difference between AI and machine learning. This new process uses machine learning which is actually relatively mature tech and has nothing to do with programs making independent decisions.

What is the selection based on? Essentially it's magic wand with a different and more complex set of parameters. Contrast? Color variance? Focus? How does it define a subject? Is a car a subject? Seems to be color difference.
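For comparison, here is roughly what the classic magic wand does: a flood fill outward from the clicked pixel, governed by a single hand-tuned colour tolerance. A learned "Select Subject" effectively replaces that one threshold with many parameters fitted from training data. This is a minimal Python sketch of the classic behaviour, not Photoshop's implementation:

```python
from collections import deque

import numpy as np

def magic_wand(image, seed, tolerance=0.1):
    """Classic magic wand: flood-fill outward from the clicked seed pixel,
    keeping 4-connected neighbours whose colour stays within `tolerance`
    of the seed colour.
    """
    h, w = image.shape[:2]
    seed_colour = image[seed]
    selected = np.zeros((h, w), dtype=bool)
    selected[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and not selected[ny, nx]:
                if np.abs(image[ny, nx] - seed_colour).max() <= tolerance:
                    selected[ny, nx] = True
                    queue.append((ny, nx))
    return selected

# Reddish square on a blue background; "click" inside the square.
img = np.zeros((5, 5, 3))
img[:] = [0.1, 0.1, 0.8]           # blue background
img[1:4, 1:4] = [0.9, 0.1, 0.1]    # red subject
mask = magic_wand(img, (2, 2), tolerance=0.2)
```

A colour-difference rule like this handles clean, high-contrast demos well, which is exactly why the commenter's question matters: the interesting cases are the ones where colour alone can't define the subject.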

Does that mean Photoshop will be downloading a vast database to my hard drive for it to work with? Or is the vast database only necessary for the machine learning stage? (Or, God forbid, will this feature only work when my computer is connected to the Adobe mother ship?)

It's interesting that even in the scripted examples, it shows off just how limited the implementation is. The leash, the dog's tail, the man's right leg, the dog's left leg and his feet will all need to be masked in manually. And that assumes this content-aware AI is an absolute wizard at figuring out hair transparency and can feather it in convincingly.

So while it's useful, and it looks like it might save a minute or two in producing the initial selection, the hard tedious work of refining all the individual edges is still fully up to you.

Then again, it really depends on the quality you're expecting. If you are the sort of user that doesn't mind the edge defects from the iPhone's Portrait Mode, this could be a miraculous time saver.

Anyway, on a scale from 1 to Impressive, this is certainly no Content-Aware Fill.

bjorn, fuego etc: Nonsense. Manual selections are great, but the name of the game is quality while saving time. There's no great secret to speeding up manual work - it's pretty slow. Slow costs me money. If there's a technique that maintains the necessary quality while speeding up the bottom line time cost, I'll use it.

Sometimes the auto tools save huge amounts of time - and sometimes they don't. Often they provide a starting point. That's okay. Just because it isn't 100% perfect does not mean it is 100% useless. For some reason, people on the internet often think that.

I make a living doing this and use all the tools given at some point. They all have strengths and weaknesses depending on the context.

nachos, so people who don't like paying for software that they will never own are not professionals? Or maybe people who disagree with you are not professionals? Where do we collect the professional membership cards from you anyway, seeing as you are deciding?

@nachos Such BS. Do you subscribe to your car? There are people who prefer to rent a car and people who prefer to buy one. This has nothing to do with being a professional or not. Furthermore, Adobe is trying to ransom users into getting their files into the cloud. I would like to have the choice to buy it as a standalone version, too.

@Stereodesign. With perpetual licences, you own a copy of the software under the terms of the licence. You are entitled to have it, keep it and are responsible for its use. The licence normally stipulates that you cannot duplicate or modify and resell the product.

You can't produce or sell copies of other copyright or patent protected "hard" goods either, yet you might consider you "own" those when you have paid for them. (e.g. watches, electronic items, etc.)

Under subscription, you agree to temporary licences that are withdrawn if you no longer accept changeable terms that are at the discretion of the software supplier. The changing licence terms could be anything from price increases, restrictions on use, or ownership of work created with the product.

If they'd shown something like this working 100% perfectly I'd be highly suspicious of the demonstration. This shows it working well enough to save a lot of time, even if it still needs some manual tweaking.
