I have a bunch of files from log1 to log164.
I'm trying to list the directory (sorted) in a UNIX terminal, but the sort utilities only give me lexicographic output like this:
home:logs Home$ ls -1 | sort
...
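One likely fix, assuming GNU coreutils: `sort -V` ("version sort") compares the embedded numbers as numbers rather than character by character, so log2 comes before log10. A minimal sketch with a throwaway directory standing in for the real logs:

```shell
# Stand-in files for log1..log164 in a temporary directory.
dir=$(mktemp -d)
touch "$dir"/log1 "$dir"/log2 "$dir"/log10 "$dir"/log100

# Plain `sort` would give: log1, log10, log100, log2.
# GNU `sort -V` compares the numeric parts numerically:
sorted=$(ls -1 "$dir" | sort -V)
echo "$sorted"

rm -r "$dir"
```

GNU `ls` also accepts `-v` for the same ordering (`ls -1v`), skipping the pipe entirely.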

I had a command which would work through a text file, count all the occurrences of each word, and print them out like this:
remy@box $˜ magic-command-i-forgot | with grep | and awk | sort ./textfile.txt
...
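The half-remembered pipeline is probably the classic tr/sort/uniq word-frequency idiom (no grep or awk needed). A sketch using a temporary sample file in place of textfile.txt:

```shell
# Throwaway sample standing in for textfile.txt.
f=$(mktemp)
printf 'the cat sat on the mat\nthe cat\n' > "$f"

# tr -cs: replace every run of non-letters with one newline (one word per line);
# sort groups identical words; uniq -c counts each group; sort -rn ranks by count.
counts=$(tr -cs '[:alpha:]' '\n' < "$f" | sort | uniq -c | sort -rn)
echo "$counts"

rm "$f"
```

The top line here is `3 the` (with uniq's leading padding), followed by `2 cat`, then the singletons.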

I am getting output from a program that first produces one line that is a bunch of column headers, and then a bunch of lines of data. I want to cut various columns of this output and view it sorted ...
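One straightforward way to keep the header in place is to split the stream: print line 1 untouched, then sort only the remaining lines. A sketch with made-up two-column data (the column names and sort key are illustrative):

```shell
# Sample program output: one header line, then data rows.
f=$(mktemp)
printf 'NAME COUNT\ncarrot 3\napple 9\nbanana 5\n' > "$f"

# head emits the header; tail -n +2 skips it so sort sees only the data.
# -k2,2nr: sort on field 2 only, numerically, descending.
result=$( { head -n 1 "$f"; tail -n +2 "$f" | sort -k2,2nr; } )
echo "$result"

rm "$f"
```

When the input is a pipe rather than a file, the same idea works as `awk 'NR==1 {print; next} ...'` feeding the rest into sort, since a pipe can only be read once.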

I'm trying to sort some data using sort. I noticed it was sorting by digit rather than number, so I added the -n flag. It then seemingly only numerically sorts on the first field though. Breaking it ...
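That is how `-n` behaves: with no key specified, the comparison starts at the beginning of the line. To sort on one particular field, give `-k` a start,stop pair and attach the `n` flag to the key. A sketch with toy data:

```shell
# Two whitespace-separated fields; we want a numeric sort on field 2 only.
data=$(printf 'b 10\na 2\nc 1\n')

# -k2,2n: the key starts and stops at field 2, compared numerically.
bykey2=$(echo "$data" | sort -k2,2n)
echo "$bykey2"
```

Without the stop (`-k2n` alone), the key runs from field 2 to the end of the line, which can reintroduce surprises on multi-field data; `-k2,2n` pins it to exactly one field.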

I have an Apache log file, access.log. How do I count the number of occurrences of each line in that file? For example, the result of cut -f 7 -d ' ' | cut -d '?' -f 1 | tr '[:upper:]' '[:lower:]' is
a.php
b.php
a.php
...
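Appending `sort | uniq -c | sort -rn` to that pipeline gives the per-line counts, most frequent first. A sketch with the sample lines above written to a temporary file in place of the real cut/tr output:

```shell
# Stand-in for the output of the cut/tr pipeline above.
f=$(mktemp)
printf 'a.php\nb.php\na.php\na.php\n' > "$f"

# sort groups identical lines so uniq -c can prefix each with its count;
# the final sort -rn ranks the counted lines, highest count first.
top=$(sort "$f" | uniq -c | sort -rn)
echo "$top"

rm "$f"
```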

Everywhere I see someone needing to get a sorted, unique list, they always pipe to sort | uniq. I've never seen any examples where someone uses sort -u instead. Why not? What's the difference, and why ...
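For plain deduplication the two are interchangeable (`sort -u` just saves a process); `uniq` earns its keep through flags `sort -u` has no counterpart for, such as `-c` (count) and `-d` (show only duplicated lines). A quick sketch:

```shell
list=$(printf 'b\na\nb\na\na\n')

# Plain dedup: both forms yield the same sorted, unique list.
via_pipe=$(echo "$list" | sort | uniq)
via_u=$(echo "$list" | sort -u)

# Only uniq offers counting and duplicate selection:
dup_counts=$(echo "$list" | sort | uniq -c | sort -rn)
echo "$dup_counts"
```

So the answer to "why not?" is mostly habit, plus the fact that many people reach for `uniq -c` next and keep the two-stage form for consistency.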

How can I compare and print data from different text files into one file in shell?
I have captured NAS details of three different boxes using SSH; now I need to combine all three text files into one file ...
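Depending on the layout wanted, `cat` appends the files one after another, while `paste` lines them up side by side, tab-separated. A sketch with hypothetical capture files (the nas*.txt names and contents are placeholders, not from the question):

```shell
# Hypothetical per-box capture files.
d=$(mktemp -d)
printf 'box1-detail\n' > "$d/nas1.txt"
printf 'box2-detail\n' > "$d/nas2.txt"
printf 'box3-detail\n' > "$d/nas3.txt"

# Stacked: one file after another.
cat "$d"/nas1.txt "$d"/nas2.txt "$d"/nas3.txt > "$d/combined.txt"

# Side by side: line 1 of each file joined by tabs, then line 2, and so on.
merged=$(paste "$d"/nas1.txt "$d"/nas2.txt "$d"/nas3.txt)
echo "$merged"

rm -r "$d"
```

If the files share a common key column, `join` (on sorted input) is the tool for matching rows up by key rather than by line position.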

I'm aware that I can somehow sort this output numerically (so cpu1/ follows cpu0/) ... I could probably get something to work eventually by splitting up the string various ways with awk, but is there ...
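No awk surgery needed if GNU `sort -V` is available: version sort compares the numeric run inside each name numerically, so cpu2/ lands before cpu10/. A sketch with illustrative directory names:

```shell
# Names as ls might print them, lexicographically scrambled.
dirs=$(printf 'cpu0/\ncpu10/\ncpu1/\ncpu2/\n')

# Lexicographic sort would put cpu10/ before cpu2/; version sort does not.
vsorted=$(echo "$dirs" | sort -V)
echo "$vsorted"
```

Without GNU sort, a field split can substitute for names of exactly this `cpuN/` shape: `sort -t u -k2,2n` splits on the letter `u` and sorts the remainder numerically (an assumption about the name format, not a general solution).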