My first few attempts at using Content-Aware Fill have been disappointing. I created a panorama and wanted to fill the foreground (misty water) and the sky (cloudy) to the top and bottom of the frame. I selected the transparent pixel areas that needed to be filled with water and sky, then expanded the selection by 6 pixels so that PS had some pixels to reference for the fill operation. There wasn't much sky above the tree line, but Content-Aware Fill worked fine in that area. Filling the foreground with the misty water, however, did not work. PS insists on referencing pixels well outside my selection and using them as part of the fill. Even after repeatedly undoing and redoing the fill, PS kept using pixels far from the selection to create the new content. Why is this?

It seems that the only user control over this tool is the selection itself. Shouldn't that tell PS which pixels you want it to reference when creating the fill? Why does PS insist on going well outside the selected pixels for a reference? If I wanted it to reference other areas of the photo, I would simply include those areas in, or very close to, the selection. Isn't the tool supposed to use the selection as a guide? How can you tell it to use the selected pixels as a reference?

Content-Aware Fill only uses the selection to determine what you want replaced. It will sample the whole image and take what it needs to fill in what's missing. The point of Content-Aware is that you don't need to choose where it samples from; it does that automatically. The thing most people don't get is that Content-Aware will not work perfectly in every situation. However, it is much better than doing the whole thing by hand. It's going to save me HOURS every day.

No, selections don't limit the areas Content-Aware Fill takes its info from. As far as I can tell, it takes info from around the selection you made, so try selecting only what you want to replace, or do it in multiple steps.
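Adobe has never published the CS5 algorithm, but the behavior described here is consistent with a patch-based fill that searches the whole image for matching source patches. A toy numpy sketch of that idea (purely illustrative, not Photoshop's actual code) shows why the selection alone cannot limit where samples come from — the candidate search below deliberately ignores how far a source patch is from the hole:

```python
import numpy as np

def naive_content_fill(img, hole, patch=3):
    """Fill masked pixels by copying from the best-matching patch found
    ANYWHERE outside the hole -- proximity to the selection is never
    considered, mirroring the behavior described in the thread."""
    out = img.astype(float).copy()
    r = patch // 2
    h, w = img.shape
    # Candidate source centers: any pixel whose whole patch avoids the hole.
    sources = [(y, x) for y in range(r, h - r) for x in range(r, w - r)
               if not hole[y - r:y + r + 1, x - r:x + r + 1].any()]
    for y, x in np.argwhere(hole):
        best, best_err = None, np.inf
        for sy, sx in sources:
            # Compare the known pixels around the hole to each source patch.
            err = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not hole[ny, nx]:
                        err += (out[ny, nx] - out[sy + dy, sx + dx]) ** 2
            if err < best_err:
                best, best_err = (sy, sx), err
        out[y, x] = out[best]
    return out
```

With a half-sky, half-water test image, a hole in the sky is filled with sky values because those patches match best — but nothing in the search prevents a distant patch from winning when it happens to match the hole's surroundings, which is exactly the "chain-link fence from across the frame" effect reported below.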

Not so. I have a tiny area on the edge of a photo with the front of a car sticking in. Since cropping was expected anyway, it was ideal as a test image. Well, it left parts of the car, part of the curb, and a trace of the bench behind the car... and a hunk of chain-link fence from halfway across the frame thrown in for good measure! It was actually quite hilarious!

I tried various methods of making the selection, all producing varying degrees of that mixture. Saving time? In this case, had I chosen not to crop, I could have eliminated the car and fixed the bench, or removed it entirely, with the Healing Brush and Clone tool in maybe 15 minutes max. Fixing the result of Content-Aware Fill would take longer, because I would immediately lose certain reference points in front of the car.

Now if one could predict and control that behavior, it would be a wonderful tool for the world of the bizarre! Especially if it could reference parts of another picture at its pleasure!

I do like the tool, but as the OP found, you fix the edges of a panorama at your peril. Again, when shooting panos you should include enough coverage to account for the crop needed at the end of the stitch.

I've used it a few times and I think it's OK. Like most new features, it will tend to get better with future releases.

I'm only using the trial version to date, as I'm still experimenting with CS5, and I think it's much improved over CS4. I'm trying to break it to see if it crashes, and so far it has worked perfectly, even with a 55-inch 3D layer at 200 ppi. As soon as the boxed version hits my local computer shop I'm going to be a little bit poorer... it's a worthy upgrade.

I appreciate everyone's comments. It seems that Content-Aware Fill should look only at the areas just outside the selection marquee, since that is the area most likely to contain content that needs to resemble or match the filled area. But it does not do that, and instead references other areas of the image. Why? Only the developers could explain. I don't expect miracles, but the logic behind this is hard to understand.

I am glad this new tool is there, and I think with some experience I will learn when and how to use it in conjunction with the Clone Stamp and Healing brushes.

I would suggest that once you use a tool such as the Magnetic Lasso or Rectangular Marquee to define an area:

To replace:

1. Tones with colour (e.g. a scratch on a car, breadcrumbs on a tabletop)

a. Use the Eyedropper to sample a colour.

b. Once the colour is defined, then use Content-Aware.

2. A pattern with a pattern (e.g. beach sand to replace dead fish and weeds on a beach)

a. Use the Clone Stamp to define the pattern that will replace the "corrupt" area.

b. Once the pattern is defined, then use Content-Aware.

The problem also has to do with Internet reporters, or whatever they're called. They all follow each other and make the same comments.

What happens is that all of these NetPorters end up publishing almost verbatim the same material. Unfortunately, software netporting is not unique.

The same holds true for stock netporting. Some "expert" in some office off Wall Street (everyone is moving away from there) writes that Company B23W1X should make $1.78 a share, with net profit for the quarter of $148,000,000. Pretty soon there are 100,000,000 identical articles copied verbatim from the net, or some other "expert" copies from the first netporter but "adjusts" the data by one cent and drops the profit to $138,000,000. Company B23W1X then only makes $1.77 a share and only $147,999,999.00.

What happens? All the brokerage houses sell.

Unfortunately, what happens with software is that everyone forks over a bundle of money for the "new features" only to find out they don't always work!

Bottom line: wait a few months till the regular public and users start posting.

The same stampede occurred with Apple's new operating system, Lion (OS X 10.7). Apple sold a million copies of it in a single day!

I waited, and started to read the problems people were having. I'm still considering the new OS.

I worked out right from the start that the trick is to make a rough selection of just the pixels you want to be considered for your CAF and copy them to a new layer. Then select your holes, or the bits you want to remove, and use CAF.

So if you don't have a whole lot of sky and you need to fill a hole in the sky but it keeps being filled with half a tree from the lower part of the image, copy what sky you do have to a new layer, and use CAF on it. Easy peasy.
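The copy-to-a-new-layer trick above amounts to restricting where the fill is allowed to sample from. In toy terms (a hypothetical numpy sketch, not Photoshop's actual algorithm), it is one extra mask: instead of letting the fill pick its source anywhere in the image, only pixels inside an `allowed` region — the copied sky — are candidates:

```python
import numpy as np

def fill_pixel(img, hole_y, hole_x, allowed):
    """Choose a fill value for one hole pixel, sampling ONLY from pixels
    where `allowed` is True -- the toy equivalent of copying just the sky
    to its own layer before running the fill (illustrative sketch, not
    Photoshop's actual algorithm)."""
    h, w = img.shape
    # Target value: mean of the hole's in-bounds 4-neighbours.
    neigh = [img[y, x] for y, x in ((hole_y - 1, hole_x), (hole_y + 1, hole_x),
                                    (hole_y, hole_x - 1), (hole_y, hole_x + 1))
             if 0 <= y < h and 0 <= x < w]
    target = sum(neigh) / len(neigh)
    # Best-matching candidate among the allowed pixels only.
    ys, xs = np.nonzero(allowed)
    k = np.argmin(np.abs(img[ys, xs] - target))
    return img[ys[k], xs[k]]
```

With `allowed` covering only the sky rows, a hole in the sky can only ever be filled with sky values — no half-tree can leak in, because the tree pixels are never candidates.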


Most of the disappointment expressed by the posters above seems to be centered on CAF's using source material they didn't want when filling the removed area...

If Content Aware Fill is bringing in things you don't want, that's easily taken care of by just using a layer mask to tell it exactly what you do (or don't) want it to use.

By the way, a little trick that can sometimes help in situations like the one you're showing here is to use the Clone tool set to Lighten (since the blemishes are substantially dark), then clone from nearby areas.
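The reason Lighten mode is safe here is the blend math: for each pixel it keeps whichever of the two values is lighter, so cloning only changes pixels darker than the source — the dark blemishes — and leaves already-light pixels untouched. In numpy terms (a sketch of the blend arithmetic, not Photoshop internals):

```python
import numpy as np

def clone_lighten(base, src):
    """Apply a clone stroke in Lighten mode: each output pixel is the
    lighter (larger) of the base and the cloned source, so dark blemishes
    are replaced while lighter surrounding pixels are preserved."""
    return np.maximum(base, src)

# Example: a dark blemish (30) between two light pixels (200, 210),
# cloned over with a mid-light source (190).
row = np.array([200, 30, 210])
stroke = np.array([190, 190, 190])
print(clone_lighten(row, stroke))  # only the blemish changes: [200 190 210]
```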

A bit more work on the above image with this technique yielded...

Sometimes a complex image does take a bit of elbow grease. Your task, as a Photoshop expert, is to find the combination of things that gets you where you want to go with acceptable quality and at acceptable speed.

There's really no magic. But Content Aware Fill does get close - noting my streetlight removal above, for example.