Hi guys...
I am a newbie to awk and would like a solution to what is probably a simple practical question.

I have a test file that goes as:

1,2,3,4,5,6
7,2,3,8,7,6
9,3,5,6,7,3
8,3,1,1,1,1
4,4,2,2,2,2

I would like to know how awk can get me the distinct values, say for example in col2, and figure out that the distinct values are only 2, 3, and 4.
I need to run the same check against my actual medical file, pulling the distinct service dates from a relatively big file of around 200,000 records.

And one more question: how can I use awk to print out records that don't meet a specific criterion? For example:
Eg: I want to see only those records where the distinct col2 values are less than 10, and see the actual distinct values to figure out why they are < 10.

I know I can always go for some fancy ETL tool to achieve complex requirements (of course this requirement is not complex anyway) and play around with the data, but I want to use the power of awk/sed to accomplish the task.

The sort is a prerequisite for uniq, which is unfortunate if there is a lot of data.
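For comparison, this is the sort/uniq route being referred to; a sketch assuming the sample file above has been saved as `file`:

```shell
# Recreate the sample data from the question (assumed filename: file)
printf '1,2,3,4,5,6\n7,2,3,8,7,6\n9,3,5,6,7,3\n8,3,1,1,1,1\n4,4,2,2,2,2\n' > file

# Extract column 2, then sort and deduplicate -- the whole column
# must be sorted before the duplicates can be collapsed
cut -d, -f2 file | sort -u
# prints:
# 2
# 3
# 4
```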

Using awk,

Code:

awk -F , '{ a[$2]++ } END { for (b in a) { print b } }' file

The array a counts the number of occurrences of each distinct value in the second field. We don't use the actual count of occurrences, just the keys (distinct values in the second field) in the final print, but that's obviously easy to change if you want to see the counts, too.
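As a concrete illustration of that change, the variant below also prints the occurrence count next to each distinct value (a sketch, again assuming the sample data is in `file`):

```shell
# Sample data from the question (assumed filename: file)
printf '1,2,3,4,5,6\n7,2,3,8,7,6\n9,3,5,6,7,3\n8,3,1,1,1,1\n4,4,2,2,2,2\n' > file

# Print each distinct column-2 value together with its count.
# The order of a for-in loop over an awk array is unspecified,
# so pipe through sort if you want stable output.
awk -F , '{ a[$2]++ } END { for (b in a) print b, a[b] }' file | sort
# prints:
# 2 2
# 3 2
# 4 1
```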

The array could collect something other than counts; for example, a[$2]=$0 would remember the latest line seen for each distinct value in field $2. Collecting more complex data, such as all lines with a particular value, is doable but slightly more involved -- you could append each matching line to the existing entry. At that point, though, if there is a lot of data, it would probably be more efficient to collect just the keys you want and then do a second pass to extract only the matching records.
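That two-pass idea can be sketched with awk's usual NR==FNR idiom, reading the file twice. For illustration, the first pass here keeps column-2 values that occur more than once and the second pass prints every record carrying one of those values; the "fewer than 10 occurrences" test from the question would just change the comparison:

```shell
# Sample data from the question (assumed filename: file)
printf '1,2,3,4,5,6\n7,2,3,8,7,6\n9,3,5,6,7,3\n8,3,1,1,1,1\n4,4,2,2,2,2\n' > file

# Pass 1 (NR == FNR, i.e. the first reading of the file): count column-2 values.
# Pass 2: print only records whose column-2 value occurred more than once.
awk -F , 'NR == FNR { a[$2]++; next } a[$2] > 1' file file
# prints:
# 1,2,3,4,5,6
# 7,2,3,8,7,6
# 9,3,5,6,7,3
# 8,3,1,1,1,1
```

The same shape works for the threshold question: replace `a[$2] > 1` with, say, `a[$2] < 10` to print records whose column-2 value appears fewer than 10 times.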