You won't get much by thresholding here, as the pixel intensity is pretty much the same over the image.
I would advise something that relies on edge detection, like Sobel or Canny.
In ImageJ, a quick test is Process>Find Edges.
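If you want to prototype the same idea outside ImageJ, here is a minimal sketch using scikit-image's Sobel filter on a synthetic image (the 0.1 threshold is just an illustrative choice for this toy image, not a recommendation):

```python
# Sketch: Sobel edge magnitude on a synthetic image
# (scikit-image assumed available; any grayscale array works).
import numpy as np
from skimage import filters

# Synthetic test image: a bright square on a dark background.
img = np.zeros((64, 64), dtype=float)
img[16:48, 16:48] = 1.0

edges = filters.sobel(img)   # gradient magnitude
mask = edges > 0.1           # simple threshold on the edge map

print(mask.sum())            # number of edge pixels found
```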

Alternatively, a machine-learning-based segmentation tool like Ilastik might work, but I am not fully convinced that you can have a fully automated solution.

As a semi-automated solution, active contours could maybe work: you click in the center and the algorithm finds the most homogeneous region around it, within a threshold.
You can play with this tool in ImageJ (right-click the icon to set the tolerance).
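For reference, the click-and-grow idea described above can be sketched with scikit-image's `flood` (the seed position and tolerance below are made up for a toy image):

```python
# Sketch: flood-fill from a seed pixel with an intensity tolerance,
# the same "grow a homogeneous region from a click" idea.
import numpy as np
from skimage.segmentation import flood

img = np.zeros((32, 32), dtype=float)
img[8:24, 8:24] = 0.8                          # a roughly homogeneous "field"
img += np.random.default_rng(0).normal(0, 0.02, img.shape)  # mild noise

# Seed in the centre of the field; grow while |I - I(seed)| <= tolerance.
mask = flood(img, (16, 16), tolerance=0.1)

print(mask.sum())                              # pixels captured by the region grow
```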

In addition to finding the most homogeneous regions, as @LThomas mentions, a shot in the dark: is the particular area also available in other images, differing in some other aspect?
I’m thinking of satellite images of the same spot in a different modality: other wavelengths, maybe a height map (as this image gives the impression of a terraced landscape), water content, vegetation, etc. The suggestion to use different “colours” has come up more often on this forum…

@LThomas
By “giving center coordinates” I mean that the center-of-gravity coordinate of the polygon shape is given; in other words, the polygon shape is located in the center of the image. In that case, is it possible to create a method that fills the polygon shape with a small pattern, starting from that center coordinate?

Plugins>Filters>Colour Deconvolution, Vectors From ROI, Show Matrices, selecting the ‘high ground’, the ‘sandpit’ and the ‘tree’ in the top left, then thresholding [220,255] on colour 2, got me this contour, which might not be perfect but is a start. Edit>Selection>Convex Hull is just a bit too large.

That is a reflectance image; if colour deconvolution returns some “result”, it is by pure chance and cannot be scientifically explained. Colour deconvolution expects subtractive-colour images. The colours of the sandpit, tree and grass do not mix subtractively.

Interesting, @gabriel. Sincerely: does your comment also hold when oat is grown in the ‘top field’ and rice in the surrounding terraces? I mean, the image might reflect (boom, boom) properties of the crop grown.

If one crop reflects ‘more blue’ and another crop reflects ‘more red’, I thought you could span a 2D plane, as the colour vectors of the crops have different coordinates, albeit with their main component along the green axis. Likewise, the sandpit has an entirely different vector in colour space.
Am I missing the (scientific) point of colour deconvolution?

It still applies, because the image is not a subtractive-colour image. Colour deconvolution unmixes “mixed subtractive colours”. Reflected light does not mix subtractively, so no pixel in that image is the result of subtractively mixing other colours.
See here: https://en.wikipedia.org/wiki/Subtractive_color

If you were analysing a printed image, a watercolour, or a stained slide, then CD could be used to unmix the inks (if they behaved subtractively), but in the image above that is not the case, so I can’t see that it is the appropriate method to use; it cannot be logically explained.
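To make the subtractive assumption concrete: colour deconvolution works in optical-density space, where Beer–Lambert absorbances add. A minimal sketch with invented “stain” vectors (not real stains) shows what the unmixing actually solves:

```python
# Sketch: subtractive mixing means optical densities (OD) add,
# so unmixing is a linear solve in OD space, not in RGB space.
import numpy as np

# Two hypothetical "stain" vectors in OD space (unit length, invented).
s1 = np.array([0.65, 0.70, 0.29]); s1 /= np.linalg.norm(s1)
s2 = np.array([0.07, 0.99, 0.11]); s2 /= np.linalg.norm(s2)

# A pixel made by subtractive mixing: ODs add, intensities multiply.
a1, a2 = 0.8, 0.3                  # stain amounts
od = a1 * s1 + a2 * s2             # total optical density
rgb = 10 ** (-od)                  # transmitted intensity (I0 = 1)

# Unmixing: recover the amounts by solving the linear system in OD space.
M = np.column_stack([s1, s2])
amounts, *_ = np.linalg.lstsq(M, -np.log10(rgb), rcond=None)
print(amounts)                     # ≈ [0.8, 0.3]
```

A reflectance image was never formed by multiplying transmittances, which is why this model does not apply to it.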

Hi @gabriel, not begging to differ per se, I’d like to fathom the subtle differences in the theory behind this. I am aware of the subtractive (ink on paper filtering out wavelengths when reflecting, dyes in tissue filtering out wavelengths from the incident light) vs. additive (pixels on a screen adding certain wavelengths; maybe reflected light is also considered additive?) colour systems.

Am I correct in assuming that you disqualify colour deconvolution as a means to classify crops because you classify a reflectance image as additive instead of subtractive?

As a crop illuminated by white (sun)light absorbs certain wavelengths and reflects others, a crop also acts as a filter, imho, and does not fundamentally differ from a dye in a tissue or ink on paper. It is therefore hard for me to grasp that unmixing can only be done on images originating from a subtractive, and not from an additive(?), colour system, or that crops can’t be described in a subtractive colour system.

And just to get my nomenclature correct: what is the name for the (ImageJ) method/command by which vectors in one coordinate system (RGB) can be rewritten into a different (orthogonal) coordinate system (e.g. crops, stained tissues), given that each crop or tissue has its own distinct RGB properties?
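Whatever the command is called, the linear-algebra core of the question can be sketched directly: express a pixel’s RGB vector in a basis of reference “crop” colours by solving a linear system (the crop vectors below are invented for illustration, and purely additive mixing is assumed):

```python
# Sketch: change of basis in RGB space.
# Columns of `crops` are hypothetical reference colours: oat, rice, sand.
import numpy as np

crops = np.array([[0.3, 0.2, 0.8],
                  [0.6, 0.7, 0.7],
                  [0.2, 0.3, 0.4]])   # rows = R, G, B

pixel = 0.5 * crops[:, 0] + 0.5 * crops[:, 1]   # a 50/50 oat-rice mix

# Coordinates of the pixel in the crop basis (additive mixing assumed).
coords = np.linalg.solve(crops, pixel)
print(coords)                          # ≈ [0.5, 0.5, 0.0]
```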

Thanks @gabriel, I’ll deduce from your arguments, and wholeheartedly agree, that it will never be possible to obtain a quantitative result from the image as meant in the Beer-Lambert sense. Therefore I’m the first to drop the ‘quantitative’ from the exchange of views, if ever it was part of it.

Let’s see if others can chip in on the vector-rewriting part of the (im)possibility to qualitatively classify crops/parts of the image.