Tag Info

In ksh93 and zsh (and bash), there's a string-replacement construct ${VARIABLE//PATTERN/REPLACEMENT}, used twice in the following snippet: once to replace ' with '' and once to replace newlines with '+char(10)+'. If there are no newlines in the input string, you can omit the second assignment.
quoted_string=\'${raw_string//\'/\'\'}\'
...
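As a runnable illustration of the construct, here is a hypothetical sql_quote helper (my name, not from the answer) that doubles embedded quotes and wraps the result; the expansion works the same way in bash:

```shell
#!/bin/bash
# sql_quote is a hypothetical helper: it doubles every embedded single
# quote via ${var//pattern/replacement}, then wraps the result in quotes.
sql_quote() {
    local q="'"
    printf "'%s'" "${1//$q/$q$q}"
}

sql_quote "O'Brien"    # prints 'O''Brien'
```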

This looks like an XY problem: you say you want to add a break between every two words, but what you really want is to pretty-print the data returned by an SQL query.
Your problem is caused by the fact that your executeSQLQuery function (or script, or program) returns formatted output rather than just the data ... and it looks like it is doing that ...

POSIXly, you can safely escape any string into one concatenated string for re-input to the shell like:
alias "string=$(cat file)"; alias string
alias will hard-quote its output and prepend (at least) string= to the head of the string. bash (in a break with the standard) also adds the string alias to the head of the output. Still, you can get an eval-friendly quoted ...
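A sketch of the full round trip (the variable names are mine; the prefix-stripping handles both bash's alias-prefixed output and the plain POSIX form):

```shell
#!/bin/bash
# Demonstrate alias-based quoting: define an alias holding an arbitrary
# string, print it (the shell hard-quotes the value), strip the prefixes,
# and eval the quoted remainder back into a variable.
s="it's a 'test'
with a second line"
alias "string=$s"
quoted=$(alias string)
quoted=${quoted#alias }     # bash prepends "alias "; POSIX shells may not
quoted=${quoted#string=}    # drop the name, keep the hard-quoted value
eval "roundtrip=$quoted"
if [ "$roundtrip" = "$s" ]; then echo "round trip OK"; fi
```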

If you only want to escape every double quotation mark and backslash you could use
perl -wpe 's/([\\"])/\\$1/g'
You could also use this with xclip:
perl -wpe 's/([\\"])/\\$1/g' myfile | xclip -selection clipboard

for VARIABLE in 1 2 3 4 5 .. N
do
    command1
    command2
    commandN
done    # a redirection here (e.g. < file or << EOF) feeds the whole loop
This is the basic syntax; any command after done should be given on the next line.
So your code should be:
#!/bin/bash
names=$(find /home/devuser -name 'BI*')
for name in $names
do
    sqlplus -s schema_name/passwd << EOF
...
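The loop this answer seems to be aiming at can be sketched as follows. The directory, file name, and schema_name/passwd are placeholders, and sqlplus is simulated with a tiny stand-in function so the sketch runs without an Oracle client:

```shell
#!/bin/bash
# Placeholders: /tmp/demo_bi, BI_report.sql, schema_name/passwd.
mkdir -p /tmp/demo_bi
: > /tmp/demo_bi/BI_report.sql
sqlplus() { cat; }    # stand-in: echoes the SQL it would run; delete for real use

# One sqlplus session per matching file, fed a here-document.
for name in /tmp/demo_bi/BI*; do
    sqlplus -s schema_name/passwd <<EOF
@$name
EXIT
EOF
done
```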

If your shell script is bash or sh, you can try appending the -x switch to the shebang and then running the script. Typically used for debugging shell scripts, it prints each line/command before executing it. So if you have the sample script below, which logs to the file logfile:
#!/bin/bash -x
echo "Hello world!" >> logfile
echo "Second command!" >> ...
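You can get the same tracing without editing the shebang by running the script as bash -x ./script.sh, or by bracketing just the region of interest with set -x / set +x (sketched below; /tmp/logfile is a stand-in path):

```shell
#!/bin/bash
# set -x turns on tracing (printed to stderr), set +x turns it off again.
set -x
echo "Hello world!" >> /tmp/logfile    # stderr shows: + echo 'Hello world!'
set +x
```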

First, you need to get SQL*Plus to error out if a SQL error occurs. You can do this by adding:
WHENEVER SQLERROR EXIT FAILURE
to your SQL script (probably up top). You can also give different codes (small non-negative integers; non-zero = failure) in place of the word FAILURE.
These will come back to your shell script in $?. So you can then have your ...
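A sketch of the shell side of that check. The sqlplus invocation is simulated here with a stand-in that exits with code 3, as WHENEVER SQLERROR EXIT 3 would, since the real client needs an Oracle install:

```shell
#!/bin/sh
# Stand-in for: sqlplus -s user/pass @script.sql
# Pretend the SQL script hit an error and WHENEVER SQLERROR EXIT 3 fired.
run_sql() { return 3; }

run_sql
status=$?
if [ "$status" -ne 0 ]; then
    echo "SQL script failed with exit code $status"
else
    echo "SQL script succeeded"
fi
```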

At minimum, you need to set ORACLE_HOME. You probably also want to add something to PATH.
When you installed Oracle, you picked a home directory. For example, let's say you used /opt/oracle/oracle/product/11.2.0/dbhome_1. Then you'd run something like this in the shell to set the environment:
export ORACLE_HOME=/opt/oracle/oracle/product/11.2.0/dbhome_1
...

If the queries are predictable enough, maybe you could simply sed out the parameter values, e.g. if many queries contain equality comparisons with numbers, sed -E 's/=[[:digit:]]+//g' (plain BRE sed needs \{1,\} instead of +) would remove all the actual numbers, leaving only the column names.
Otherwise, the only really general solutions I can think of are pattern recognition techniques like k-nearest ...
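For instance, a sketch that collapses literal values so structurally identical queries produce the same fingerprint (the normalize name and the extra quoted-string rule are my additions):

```shell
#!/bin/sh
# Collapse numeric and quoted-string literals to placeholders N and S.
normalize() {
    printf '%s\n' "$1" |
        sed -e "s/=[[:space:]]*[[:digit:]][[:digit:]]*/=N/g" \
            -e "s/='[^']*'/=S/g"
}

normalize "SELECT * FROM t WHERE id=42"        # SELECT * FROM t WHERE id=N
normalize "SELECT * FROM t WHERE id=7"         # same fingerprint
normalize "SELECT * FROM t WHERE name='bob'"   # SELECT * FROM t WHERE name=S
```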

The problem with the output is that you would like to group three words (logically, key-value pairs) on the first line of output, another three on the next, and the final two on the third line.
For this particular problem, the fast way would be:
executeSQLQuery "$QUERY" | awk '{print $1 " " $2 " " $3 "\n" $4 " " $5 " " $6 "\n" $7 " " $8 }'
But generally ...
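A more general variant of the same idea (a sketch; echo stands in for executeSQLQuery output) wraps every three fields onto a line, whatever the total field count:

```shell
#!/bin/sh
# Print three whitespace-separated fields per output line.
echo "a 1 x b 2 y c 3" |
    awk '{ for (i = 1; i <= NF; i++)
               printf "%s%s", $i, ((i % 3 == 0 || i == NF) ? "\n" : " ") }'
```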

While it is technically possible to do date arithmetic in sed, it is not at all the right tool for the job. Use a tool like awk or perl which has integer arithmetic built in.
Your requirement is an unusual one for date manipulations, so you'll need a rich date manipulation library if you don't want to hard-code the date arithmetic. Perl's Date::Manip has ...

I doubt that sed is the right tool for the job, in this case. I think you probably want to use awk, if you're already familiar with awk, otherwise, write a program.
I've known an engineer who used sed and awk to create MSC/NASTRAN input files, which had even stricter requirements than what you mention, but he was quite familiar with the tools, so cryptic ...

For each pattern, you're invoking a new instance of the sqlite program which connects to the database anew. That's a waste. You should build a single query that looks for any of the keys, then execute that one query. Database clients are good at executing large queries.
If the matching lines in the keys file only contain digits, then you can build the query ...
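A sketch of building that single query; the keys-file contents, table, and column names are assumptions, and the final sqlite3 call is commented out since it needs a real database:

```shell
#!/bin/sh
# Turn a digits-only keys file into one IN (...) query.
keys_file=$(mktemp)
printf '101\n205\n307\n' > "$keys_file"

# Keep only all-digit lines (also guards against SQL injection), join with commas.
ids=$(grep '^[0-9][0-9]*$' "$keys_file" | paste -sd, -)
query="SELECT * FROM records WHERE id IN ($ids);"
echo "$query"
# sqlite3 mydb.sqlite "$query"    # one connection, one query
rm -f "$keys_file"
```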

First things first: you can replace the if with a list. Actually, I would even replace the [[ ]]s with [ ]s and then run it in dash or another lighter sh. This even seems simple enough to ditch the entire for loop and run with xargs (always my preference; better performance). So, for example, maybe something like this ...
grep '^[0-9]' keys | xargs -P0 -I '{id}' \
sh ...
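A runnable sketch of that pipeline; the sqlite3 call is wrapped in echo so it runs anywhere, and the keys file, database, and table names are assumptions:

```shell
#!/bin/sh
# Filter numeric keys, then fan out one command per key via xargs.
# The grep filter also ensures only digits reach the sh -c command line.
# Add -P 0 (GNU xargs) before -I to run the invocations in parallel.
printf '101\nabc\n205\n' > /tmp/demo_keys
grep '^[0-9]' /tmp/demo_keys |
    xargs -I '{id}' sh -c 'echo sqlite3 mydb.sqlite "SELECT * FROM t WHERE id={id};"'
```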

This would be trivial:
1) Import the SQL data into an SQL database.
2) Output the data in the format you want with any of the SQL tools that already exist for doing this, e.g. MySQL's SELECT ... INTO OUTFILE.
And that is totally scriptable. If there are speed issues, get faster hardware, especially drives. If you absolutely want to parse this in some other ...