Of course, my final question (and challenge for you) is: What is your solution? Have you
found a better one…? :)

See you next time –
Bye-bye!

/Dee

P.S. I almost forgot to tell you about a much simpler, no-brainer
solution! A solution that works, but it’s quite ugly…

Here it is: you simply execute a chain of REPLACE calls (either as multiple SQL
statements or one big one), substituting as many of the most commonly used
English words as you choose to hard-code.

For example, you could do:

SELECT 'John Doe is the man. He is better, taller and stronger!'
INTO :input_txt;

SELECT :input_txt INTO :output_txt;

SELECT REPLACE(:output_txt, '. ', '^0') INTO :output_txt;

SELECT REPLACE(:output_txt, ' of ', '^1') INTO :output_txt;

SELECT REPLACE(:output_txt, ' to ', '^2') INTO :output_txt;

SELECT REPLACE(:output_txt, ' is ', '^3') INTO :output_txt;

… and so on ...
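To make the idea concrete outside of SQL, here is a minimal Python sketch of the same chained-replace scheme, including the reverse (decompression) pass the SQL example leaves implicit. The dictionary entries mirror the REPLACE chain above; the assumption (as in the original) is that the token prefix '^' followed by a digit never occurs in the input text.

```python
# Hard-coded dictionary: common substrings mapped to short tokens,
# mirroring the chained SQL REPLACEs above.
SUBSTITUTIONS = [
    ('. ', '^0'),
    (' of ', '^1'),
    (' to ', '^2'),
    (' is ', '^3'),
]

def compress(text):
    # Apply each replacement in order, just like the chained REPLACE calls.
    for word, token in SUBSTITUTIONS:
        text = text.replace(word, token)
    return text

def decompress(text):
    # Undo the replacements in reverse order to restore the original text.
    for word, token in reversed(SUBSTITUTIONS):
        text = text.replace(token, word)
    return text

input_txt = 'John Doe is the man. He is better, taller and stronger!'
output_txt = compress(input_txt)
print(output_txt)  # 'John Doe^3the man^0He^3better, taller and stronger!'
```

Note that decompression must walk the dictionary in reverse so that later tokens are expanded before earlier, longer patterns; and entries like '. ' → '^0' (two characters to two characters) save nothing by themselves, so only multi-character words actually shrink the text.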

This would obviously achieve (some) compression, and the more words you include, the better the compression ratio...
...but yes, it’s far, far, far from
looking good or optimal... Still, if you look strictly at the problem of achieving some level of text compression, it is probably the simplest solution after all...