David Maus <address@hidden> writes:
> Sebastian Rose wrote:
>>David Maus <address@hidden> writes:
>>>> sh$ man utf-8
>>>
>>> Thanks! I finally get a grip on one of my personal nightmares.
>
>
>>It's not that bad, is it? :D
>
> Even better: It makes sense ;)
>
>>> The attached patch is the first step in this direction: It modifies
>>> the algorithm of `org-link-escape', which now iterates over the
>>> input string with `mapconcat' and escapes all characters that are
>>> in the escape table or between 127 and 255.
>
>>Between 128 (1000 0000) and 255 ??
>
>>The binary representation of 127 is 0111 1111, which is a valid ASCII
>>character: DEL, actually (sh$ man ascii)
>
> Right, and that's why it is encoded: No control characters in a URI.
>
> The final algorithm for the shiny new unicode aware percent encoding
> function would be:
>
> - percent encode all characters in TABLE
> - percent encode all characters below 32 and above 126
> - encode the char in utf-8
> - percent escape all bytes of the encoded char
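The four steps above can be sketched in Python for illustration (the real code is Emacs Lisp; the function name and the example escape table here are hypothetical, not Org's actual table):

```python
def percent_encode(s, table=frozenset('%[] ')):
    """Unicode-aware percent encoding, following the steps above.

    table holds the characters that must always be escaped
    (a made-up example set, not Org's real escape table)."""
    out = []
    for ch in s:
        if ch in table or ord(ch) < 32 or ord(ch) > 126:
            # Encode the char as UTF-8, then percent-escape every byte.
            out.append(''.join('%%%02X' % b for b in ch.encode('utf-8')))
        else:
            out.append(ch)
    return ''.join(out)
```

With this, percent_encode('á') yields '%C3%A1' (two escaped bytes, one per UTF-8 byte), while plain ASCII passes through untouched.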
>
> The remaining problem is keeping backward compatibility. There are Org
> files out there where "á" is encoded as "%E1" instead of "%C3%A1". The
> percent decoding function should be able to recognize these old
> escapes and return the right value.
There is no way to do this in a completely safe manner, but here's
what is possible.
These all work as expected:
(org-protocol-unhex-string "%E1") ; á
(org-protocol-unhex-string "%A1") ; ¡
(org-protocol-unhex-string "%E1%A1") ; á¡
(org-protocol-unhex-string "%C3%B6") ; still German ö
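The fallback behaviour shown in these calls can be sketched in Python (the function name is hypothetical; the real implementation is Emacs Lisp's `org-protocol-unhex-string'). The idea: gather each run of percent escapes into raw bytes, try to decode them as UTF-8, and only if that fails fall back to Latin-1, which covers the old single-byte escapes:

```python
import re

def unhex_string(s):
    """Decode percent escapes, preferring UTF-8 over legacy Latin-1."""
    def decode(m):
        # Turn "%C3%B6" into the raw bytes b'\xc3\xb6'.
        data = bytes(int(h, 16) for h in m.group(0).split('%')[1:])
        try:
            return data.decode('utf-8')
        except UnicodeDecodeError:
            # Not valid UTF-8: assume an old Latin-1 escape like "%E1".
            return data.decode('latin-1')
    # Decode each maximal run of escapes as one unit, so multi-byte
    # UTF-8 sequences are seen whole.
    return re.sub(r'(?:%[0-9A-Fa-f]{2})+', decode, s)
```

Because "%E1" and "%E1%A1" are not valid UTF-8 byte sequences, they fall through to Latin-1 and come out as á and á¡, while "%C3%B6" decodes as UTF-8 ö, matching the results above.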
Also, capturing text from this page still works:
http://www.jnto.go.jp/jpn/