It came up in the context of rigid bodies that certain optimizers
(namely CG) like to have all their parameters vary over a similar
scale. As a result, it can be important to rescale non-xyz
attributes. One way of doing this would be to add a method to
Model: FloatPair get_range(FloatKey) which returns the current
range of values for a particular FloatKey. Then CG could use that
to rescale all its values internally.
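A minimal sketch of what this could look like. Everything here is hypothetical (the `Model`, `FloatKey`, and `get_range` shown are stand-ins for the proposed API, not existing code); it just illustrates how an optimizer could use a per-attribute range to map values onto a comparable scale:

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Stand-ins for the proposed API; names are assumptions, not real IMP types.
using FloatKey = std::string;
using FloatPair = std::pair<double, double>;

struct Model {
  // Current values of each optimizable attribute across all particles.
  std::map<FloatKey, std::vector<double>> attributes;

  // Proposed method: the current [min, max] range of values for one key.
  FloatPair get_range(const FloatKey &key) const {
    const std::vector<double> &vals = attributes.at(key);
    auto mm = std::minmax_element(vals.begin(), vals.end());
    return {*mm.first, *mm.second};
  }
};

// How CG might use the range internally: map a value onto [0, 1] so that
// xyz coordinates and angles vary over a similar scale.
double rescale(double value, const FloatPair &range) {
  double width = range.second - range.first;
  if (width == 0) return 0.0;  // degenerate range: attribute is constant
  return (value - range.first) / width;
}
```

The optimizer would work in the rescaled coordinates and map back before writing values into the model.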

Something like that would indeed be useful. The typical changes of the
values are what matter to optimizers; the absolute values are not
really important. Thus, this function would somehow need to accumulate
knowledge of the magnitude of changes of the variables throughout
optimization.
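One way such accumulation could work, sketched with made-up names (nothing here is an existing API): keep an exponential moving average of the per-step change of each variable, updated as the optimizer runs.

```cpp
#include <cmath>
#include <map>
#include <string>

// Hypothetical accumulator: tracks the typical magnitude of per-step
// changes of each variable during optimization.
struct ChangeTracker {
  std::map<std::string, double> mean_abs_change;  // moving average of |delta|
  std::map<std::string, double> last_value;       // value at the previous step
  double alpha = 0.1;  // smoothing factor; an arbitrary choice here

  void update(const std::string &key, double value) {
    auto it = last_value.find(key);
    if (it != last_value.end()) {
      double change = std::fabs(value - it->second);
      double &m = mean_abs_change[key];  // starts at 0.0 on first insert
      m = (1.0 - alpha) * m + alpha * change;
    }
    last_value[key] = value;
  }
};
```

An optimizer could then compare `mean_abs_change` across variables to decide how to rescale them relative to each other, without ever looking at absolute values.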

I was thinking of using the relative magnitudes as a proxy for the
relative changes, but that doesn't always make sense. However, for the
rigid body case they probably do a good enough job: that is, when
computing a structure from a random conformation, x, y, z and the
angles will go through their whole range, and if you are just refining
the structure and have reasonably sized rigid bodies, they will both go
through a small, roughly comparable, fraction of their range. This
does not hold if we are using internal coordinates for a protein or
similar non-xyz variables.

Tracking changes is problematic, as they are highly variable when you
use something like CG, and past behavior does not predict the future
well (since things settle down). I tried using the changes over the
last few steps to predict how much slack to add to the non-bonded
list, and that was just a mess.

So a refined suggestion:

Model would have a parameter called scale per optimizable attribute.
By default this scale is the width of the range of values exhibited
for that attribute, but it can be set to anything by the user.
Optimizers can use this scale as a hint to improve performance (that
is, it is non-binding).
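The refined suggestion could be sketched like this. Again, all names (`ScaledModel`, `get_scale`, `set_scale`) are hypothetical, chosen just to show the intended semantics: default scale from the exhibited range width, user override allowed, and the optimizer free to ignore it.

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Hypothetical model with one scale per optimizable attribute.
struct ScaledModel {
  std::map<std::string, std::vector<double>> attributes;
  std::map<std::string, double> user_scale;  // explicit user overrides

  double get_scale(const std::string &key) const {
    auto u = user_scale.find(key);
    if (u != user_scale.end()) return u->second;  // user override wins
    // Default: width of the range of values exhibited for the attribute.
    const std::vector<double> &vals = attributes.at(key);
    auto mm = std::minmax_element(vals.begin(), vals.end());
    double width = *mm.second - *mm.first;
    return width > 0 ? width : 1.0;  // fall back to 1 for constant attributes
  }

  void set_scale(const std::string &key, double s) { user_scale[key] = s; }
};

// The hint is non-binding; an optimizer that does use it might simply
// express its internal steps in units of the scale, e.g.:
double scaled_step(double raw_step, const ScaledModel &m,
                   const std::string &key) {
  return raw_step / m.get_scale(key);
}
```

A CG implementation that ignores `get_scale` behaves exactly as today, which is what makes the hint non-binding.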