Andrzej Kozlowski wrote:
> [...]
>
> I realize now (I think I noticed this already once a long time ago but
> forgot) that this way of using LLL seems rather different from the one
> I once learned and tried to (roughly) describe in this thread. In
> particular, the approach to "recognizing algebraics" you describe only
> uses lattice reduction for the standard euclidean inner product (or
> the standard norm) whereas the approach that I had in mind (described,
> for example, in Henri Cohen's "Course in Computational Number Theory",
> page 100), uses lattice reduction for a specially constructed
> positive definite quadratic form. The two approaches appear to be
> equivalent but I can't at this time clarify the relationship between
> them. Is there something obvious that I am missing?
Yes, the fact that I had multiplied the number by a large constant
before rounding. That constant (10^16, in my example) was effectively
the square root of the N in Cohen's exposition. That is to say, his
inner product is given by the diagonal matrix with main diagonal
{1,1,...,1,sqrt(N)}.
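To see why coordinate scaling and a diagonal quadratic form come to the
same thing: for a diagonal positive W one has <x,y>_W = (sqrt(W) x).(sqrt(W) y),
so reducing under the form W is ordinary reduction of the coordinate-scaled
basis. A quick exact check of that identity (Python sketch; the weights and
vectors are arbitrary choices of mine, not anything from the original
computation):

```python
# Check that the W-weighted inner product x . (W y), with W diagonal and
# positive, equals the plain dot product of the coordinate-scaled vectors
# sqrt(W) x and sqrt(W) y.  The last weight is a perfect square, so the
# whole computation stays exact in integer arithmetic.
w = [1, 1, 10**8]      # diagonal of W (think N)
s = [1, 1, 10**4]      # elementwise square roots of w (think sqrt(N))
x = [3, -2, 5]
y = [1, 4, -7]

weighted = sum(xi * wi * yi for xi, wi, yi in zip(x, w, y))
scaled = sum((si * xi) * (si * yi) for si, xi, yi in zip(s, x, y))
assert weighted == scaled
print(weighted)   # -> -3500000005
```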
> Furthermore, I have noticed that
>
> Developer`SystemOptions["LatticeReduceOptions"]
> {"LatticeReduceOptions" ->
> {"LatticeReduceArithmetic" -> Integers,
> "LatticeReduceInnerProduct" -> Automatic,
> "LatticeReduceRatioParameter" -> Automatic}}
>
> makes it possible to use a non-standard inner product in
> LatticeReduce. Presumably a custom inner product should be
> specified as a pure function?
Yes. Example below.
The reason this option is hidden as a system option rather than exposed
in LatticeReduce is that the code has not been made bulletproof against
bad inner products, e.g. ones that do not correspond to symmetric
positive definite matrices.
Here is a quick example of usage.
lat = {{1,0,125123},{0,1,51253}};
Reduce this using the default Automatic setting.
In[8]:= redlat = LatticeReduce[lat]
Out[8]= {{34, -83, 183}, {213, -520, -361}}
Recall that lat is in Hermite normal form. Suppose we started with
redlat and wished to recover the HNF using lattice reduction. We can do
this by weighting the inner product so that earlier columns count for
more than later ones. This is in effect a simple diagonal inner product
matrix; I implement it by appropriately rescaling one argument vector.
In[17]:= SetSystemOptions["LatticeReduceOptions"->
{"LatticeReduceInnerProduct"->(({2^100,2^50,1}*#1).#2&)}];
In[18]:= lat2 = LatticeReduce[redlat]
Out[18]= {{0, 1, 51253}, {1, 0, 125123}}
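For what it is worth, the same trick can be replayed outside Mathematica.
Below is a textbook LLL in Python with exact rational Gram-Schmidt (my own
sketch; the function name lll, delta = 3/4, and the weights 2^40 and 2^20
are illustrative choices, not what LatticeReduce uses internally). Scaling
column j by w_j and reducing with the ordinary inner product is the same as
reducing under the diagonal inner product matrix diag(w_j^2), so this
reproduces the recovery of the HNF basis from redlat:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL on integer row vectors, exact rational arithmetic."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b[k]
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        if dot(bstar[k], bstar[k]) >= \
                (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                            # Lovasz condition holds
        else:
            b[k - 1], b[k] = b[k], b[k - 1]   # swap and step back
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]

# Weighted reduction = scale the columns, reduce, unscale.
redlat = [[34, -83, 183], [213, -520, -361]]
w = [2 ** 40, 2 ** 20, 1]                     # illustrative weights
scaled = [[wi * x for wi, x in zip(w, row)] for row in redlat]
recovered = [[x // wi for wi, x in zip(w, row)] for row in lll(scaled)]
print(recovered)   # the HNF rows (0,1,51253) and (1,0,125123), up to sign
```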
> I guess that should make it possible to implement the alternative
> approach described in Cohen's book? (Of course if they are clearly
> equivalent there would not be much point in doing this, but if it is
> not obvious it might be interesting to compare the answers one gets
> from each of them).
>
> Andrzej Kozlowski