If others could help test this, I'd greatly appreciate it.
Basically I need people to download a nightly build, wait a day or two,
then run PD and make sure nothing goes horribly wrong.
I've tested this new update implementation across all my PCs, but there
are a ton of variables involved, and I'd feel more comfortable if
others could have a go at it.

It would have been great if PD could handle all patching itself, but
this approach - while cumbersome - does carry some nice extra benefits.
In particular, this will make it easier to roll back any update errors,
and in the future we could also set up a system to automatically restore
the last stable build if PD fails to load (for example).
Right now, this is just a basic framework for the support app, and a few
changes to PD itself to start migrating update features into the
separate update app.

Except for the buttons on the left and bottom, this dialog (the most
complex one in PD) is now 100% Unicode-compatible. It looks pretty
awesome in my test CJK language files, too!
The jcButtons on the left will disappear soon, and the command buttons
on the bottom are just waiting for a proper pdButton replacement (which
is hopefully coming sooner rather than later...)

This is by far the densest collection of combo boxes anywhere in the
project, so I'm quite happy that everything worked on the first try.
(As a nice bonus, the .exe size also decreases every time we switch to
our own controls - yay!)

Another low-risk commit. I've written two pure-VB FFTs this release
cycle. One uses standard cos/sin; the second estimates those values
using sqr(). The estimation method is only 10-15% faster by my testing,
so not a huge difference.
The class isn't particularly fast. 8 MB of data can be forward
transformed, then transformed back, in about 2.5 seconds on my old ~2010
desktop. This is usable on image data, but because we have to transform
the image 4x (horizontal + vertical forward, then horizontal + vertical
inverse), we're looking at a minimum of 10 seconds on a typical digital
photo - and because most FFT operations require a transformed kernel as
well, 20 seconds. Pretty slow.
PD doesn't actually make use of this class yet, but I'm uploading it in
case someone smarter than I has ideas for improving it.
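For anyone who wants to experiment, the core of such a transform is small. Here is a minimal radix-2 Cooley-Tukey sketch in Python (the project itself is VB6; this illustrative version uses standard cos/sin via cmath, not the sqr() estimation):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    result = [0] * n
    for k in range(n // 2):
        # Twiddle factor: e^(-2*pi*i*k/n), the cos/sin part of the transform
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + t
        result[k + n // 2] = even[k] - t
    return result

def ifft(x):
    """Inverse FFT via conjugation of the forward transform."""
    n = len(x)
    y = fft([v.conjugate() for v in x])
    return [v.conjugate() / n for v in y]
```

A forward-then-inverse round trip should reproduce the input to within floating-point error, which is a handy sanity check for any FFT implementation.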

Overhauling menu icons is such a massive task, and I don't have it in me
to do it before 6.6's release. So there's going to be a weird interim
where the primary interface has gone flat, while menus retain the old
look, but oh well.
6.8 will continue the long, ugly road toward icon unification and true
high-DPI support.

It's not very often that PD *reduces* translation count, but wiping out
the old update notification form helped here.
Also, I snuck a few code changes in to prevent the addition of unwanted
text to the master en-US file.

- The old update notification form is gone. Yay!
- The new update notification form now links to a release announcement
URL (or PD's commit log, if on the nightly build track)
- Manually requested update checks work again
- Old update check code has been purged
- The central update checker now returns a boolean for whether an update
is available; this is helpful when dealing with manual update checks
- Automatic restarts are now disabled in the IDE, as they don't work
there

pdLabel is designed to be as lightweight as possible, so it does not
provide any input handling or events.
pdHyperlink is a slightly heavier replacement that adds a URL property,
which it shells when clicked. Right now, that's all it does; perhaps in
the future I'll extend it to raise a "click" event, as well.

Because pdLabel manages its own translations, we don't need to wrap
caption assignments in g_Language.TranslateMessage() anymore. pdLabel
will take care of that for us. The translation support tool now adds
Caption = assignments to the translation list, which turned up a few
other obscure places I had missed in the past.

So much of 6.6's release has been about UI work that a ton of
prototyped imaging code has built up in my personal collection. This
commit is a
low-risk addition I'd like to explore in the coming weeks, so I'm
committing my first working prototype now.
There are many ways to approximate a Gaussian function, and Infinite
Impulse Response (IIR) is one of the better ones. It provides
controllable quality, support for fractional sigma (radii), and is even
capable of an in-place transform, so a second copy of the image isn't
needed.
Memory requirements can be quite high since it requires floating-point
inputs, but I have ideas for working around that. In the meantime, this
current brute-force implementation is still much faster than PD's
current methods, and it has the added benefit of approximating a true
Gaussian much more closely at small radii than an iterative box-blur.
GIMP offers an IIR version of its Gaussian Blur, and it slightly beats
PD's existing offerings
(http://photodemon.org/315/blur-performance-photodemon-gimp-paint-net/),
so it would be nice to regain the lead in this area.
I still need to sort out some boundary handling issues, and alpha isn't
currently handled, but since I intend for this to replace PD's current
"medium quality" blur offering, now seemed like a good time to get a
working version committed.
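To illustrate the general shape of the approach (this is not PD's actual implementation), here is a crude first-order recursive (IIR) smoothing pass in Python. Production-quality IIR Gaussian approximations use higher-order filters for better fidelity, but the forward/backward structure and in-place operation are the same, and the sigma-to-coefficient mapping below is a rough heuristic:

```python
import math

def iir_smooth(data, sigma):
    """Crude first-order IIR approximation of Gaussian smoothing.
    Runs a forward then a backward exponential-smoothing pass; both passes
    operate in-place on a float list, so no second copy of the data is
    needed, and sigma can be fractional."""
    alpha = 1.0 - math.exp(-1.0 / max(sigma, 1e-6))
    out = [float(v) for v in data]
    # Forward (causal) pass
    for i in range(1, len(out)):
        out[i] = out[i - 1] + alpha * (out[i] - out[i - 1])
    # Backward (anti-causal) pass restores symmetry
    for i in range(len(out) - 2, -1, -1):
        out[i] = out[i + 1] + alpha * (out[i] - out[i + 1])
    return out
```

For a 2D image, this 1D pass would run once per row and once per column, which is where the floating-point memory requirement mentioned above comes from.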

Wherever possible, I've tried to tie this to existing menus (as
syncInterfaceToImage applies a lot of heuristics for some menus). The
lone exception is Open in Explorer, but that's easily handled by
checking to see if the image exists on-disk.

As far as I can tell, if ExifTool can't determine a codepage for a given
piece of text (e.g. it's not UTF-8, but not ASCII either, and no obvious
identifier is present), it will simply dump the raw binary data into the
XML output in Base-64 format. This is fairly common with IPTC tags, as
sloppy encoders don't make use of the (very simple!) codepage identifier
feature.
PD now retrieves such Base-64 text and stores it. It also attempts to
decode it into a usable string via basic text heuristics. If no
well-formed character set is identified (UTF-8, UTF-16, etc.), a
fallback conversion is performed using the current system codepage.
This should provide
"good enough" results for most users, and perhaps in the future I can
look at providing more control over the character-set conversion for
text with unknown encodings.
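The fallback strategy can be sketched in a few lines of Python (PD itself is VB6; the function name and the simple try-in-order heuristic here are illustrative assumptions, not PD's actual code):

```python
import base64
import locale

def decode_unknown_text(b64_text):
    """Decode Base-64 metadata text of unknown encoding.
    Tries well-formed Unicode encodings first, then falls back to the
    current system codepage. Real heuristics would also check BOMs and
    byte-pattern statistics before trusting a match."""
    raw = base64.b64decode(b64_text)
    for enc in ("utf-8", "utf-16"):
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            pass
    # Fallback: system codepage; replace undecodable bytes rather than fail
    return raw.decode(locale.getpreferredencoding(False), errors="replace")
```

The "replace rather than fail" choice matters here: sloppily encoded IPTC text should still produce something displayable instead of aborting the metadata load.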

This is separate from UTF-8 embedding (which actually relies on a
separate UTF-8 metadata file), but we'll need to pipe UTF-8 filenames in
the future, so I may as well fix this while I'm thinking of it.

UTF-8 interactions over pipes will require the same encoding heuristics
used by pdFSO, so rather than duplicate the code (or use an FSO instance
for something that's not file-related), I think it makes sense to split
out some Unicode bits into their own class.

I think this is a pretty good solution, all things considered.
- Each button caption is now tested for fit
- Word-wrap is used when available (i.e. when the caption contains
multiple words); otherwise, font-shrinking is activated.
- Shrunk font sizes are cached at resize time, so we don't have to do it
during rendering.
- A secondary font object is used to render text that must be shrunk.
This avoids the need for recreating the primary font object.
- The secondary font object is only recreated as necessary, which is
advisable as there's typically only one problematic button. Multiple
problematic buttons work just fine, but the "shrunken font" object will
necessarily be recreated for each problematic button (if their font
sizes differ).
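The decision logic above can be sketched roughly as follows (Python for illustration; `fit_caption`, `measure`, and the size constants are hypothetical stand-ins for the control's real helpers, not PD's API):

```python
def fit_caption(caption, button_width, measure, base_size, min_size=8):
    """Decide how to fit a caption into a button of a given pixel width.
    `measure(text, size)` returns the rendered pixel width of `text` at
    font size `size`. Returns a (strategy, font_size) pair."""
    if measure(caption, base_size) <= button_width:
        return ("as-is", base_size)
    if " " in caption:
        # Multiple words: word-wrap is available, so keep the base size
        return ("word-wrap", base_size)
    # Single long word: shrink the font until it fits (or hits the floor).
    # In the real control, this result is cached at resize time so it
    # doesn't have to be recomputed during rendering.
    size = base_size
    while size > min_size and measure(caption, size) > button_width:
        size -= 1
    return ("shrink", size)
```

Caching the shrunk size per button (rather than per control class) is what makes multiple problematic buttons with differing font sizes work, at the cost of recreating the secondary font object when sizes differ.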